MONITORING & EVALUATION: Some Tools, Methods & Approaches

World Bank Operations Evaluation Department
Evaluation Capacity Development

The World Bank
Washington, D.C.
www.worldbank.org/oed/ecd/

Acknowledgments

The first edition of this report was prepared by Mari Clark and Rolf Sartorius (Social Impact). A number of World Bank staff who made substantive contributions to its preparation are gratefully acknowledged, including Francois Binder, Osvaldo Feinstein, Ronnie Hammad, Jody Kusek, Linda Morra, Ritva Reinikka, Gloria Rubio and Elizabeth White. This second edition includes an expanded discussion of impact evaluation, prepared by Michael Bamberger (consultant). The task manager for finalization of this report was Keith Mackay.

Copyright © 2004
The International Bank for Reconstruction and Development / The World Bank
1818 H Street, N.W., Washington, D.C. 20433, U.S.A.
All rights reserved. Manufactured in the United States of America.

The opinions expressed in this report do not necessarily represent the views of the World Bank or its member governments. The World Bank does not guarantee the accuracy of the data included in this publication and accepts no responsibility whatsoever for any consequence of their use. The boundaries, colors, denominations, and any other information shown on any map in this volume do not imply on the part of the World Bank Group any judgement on the legal status of any territory or the endorsement or acceptance of such boundaries.

Table of Contents

M&E Overview
Performance Indicators
The Logical Framework Approach
Theory-Based Evaluation
Formal Surveys
Rapid Appraisal Methods
Participatory Methods
Public Expenditure Tracking Surveys
Cost-Benefit and Cost-Effectiveness Analysis
Impact Evaluation
Additional Resources on Monitoring and Evaluation

M&E OVERVIEW: SOME TOOLS, METHODS AND APPROACHES FOR MONITORING AND EVALUATION

Monitoring and evaluation (M&E) of development activities provides government officials, development managers, and civil society with better means for learning from past experience, improving service delivery, planning and allocating resources, and demonstrating results as part of accountability to key stakeholders.
Within the development community there is a strong focus on results--this helps explain the growing interest in M&E. Yet there is often confusion about what M&E entails. The purpose of this M&E Overview is to strengthen awareness and interest in M&E, and to clarify what it entails. You will find outlined here a sample of M&E tools, methods, and approaches, including their purpose and use; advantages and disadvantages; costs, skills, and time required; and key references. Those illustrated here include several data collection methods, analytical frameworks, and types of evaluation and review. The M&E Overview discusses:

Performance indicators
The logical framework approach
Theory-based evaluation
Formal surveys
Rapid appraisal methods
Participatory methods
Public expenditure tracking surveys
Cost-benefit and cost-effectiveness analysis
Impact evaluation

This list is not comprehensive, nor is it intended to be. Some of these tools and approaches are complementary; some are substitutes. Some have broad applicability, while others are quite narrow in their uses. The choice of which is appropriate for any given context will depend on a range of considerations. These include the uses for which M&E is intended, the main stakeholders who have an interest in the M&E findings, the speed with which the information is needed, and the cost.

Performance Indicators

What are they?
Performance indicators are measures of inputs, processes, outputs, outcomes, and impacts for development projects, programs, or strategies. When supported with sound data collection--perhaps involving formal surveys--analysis, and reporting, indicators enable managers to track progress, demonstrate results, and take corrective action to improve service delivery. Participation of key stakeholders in defining indicators is important because they are then more likely to understand and use indicators for management decision-making.

What can we use them for?
Setting performance targets and assessing progress toward achieving them.
Identifying problems via an early warning system to allow corrective action to be taken.
Indicating whether an in-depth evaluation or review is needed.

ADVANTAGES:
Effective means to measure progress toward objectives.
Facilitates benchmarking comparisons between different organizational units and districts, and over time.

DISADVANTAGES:
Poorly defined indicators are not good measures of success.
Tendency to define too many indicators, or indicators without accessible data sources, making the system costly, impractical, and likely to be underutilized.
Often a trade-off between picking the optimal or desired indicators and having to accept the indicators which can be measured using existing data.

COST:
Can range from low to high, depending on the number of indicators collected, the frequency and quality of information sought, and the comprehensiveness of the system.

SKILLS REQUIRED:
Several days of training are recommended to develop skills for defining practical indicators. Data collection, analysis and reporting skills, and management information system (MIS) skills are required to implement performance monitoring systems.

TIME REQUIRED:
Several days to several months, depending on the extent of the participatory process used to define indicators and on program complexity. Implementing performance monitoring systems may take 6-12 months.
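As a concrete illustration of the early-warning use of indicators, the short sketch below compares reported indicator values against their targets and flags those falling behind. It is a hypothetical example, not part of this report: the indicator names, baselines, targets, and the 50 percent warning threshold are all invented.

```python
# Hypothetical sketch: comparing indicator values against targets as an
# early-warning check. Indicator names, targets, and thresholds are invented.

indicators = [
    # (name, baseline, target, latest observed value)
    ("Immunization coverage (% of children under 1)", 60.0, 85.0, 68.0),
    ("Average travel time to nearest clinic (minutes)", 90.0, 45.0, 80.0),
]

WARNING_THRESHOLD = 0.5  # flag if less than 50% of planned progress achieved

for name, baseline, target, latest in indicators:
    planned_change = target - baseline
    achieved_change = latest - baseline
    # Share of the planned change achieved so far (works whether the target
    # is an increase or a decrease).
    progress = achieved_change / planned_change if planned_change else 0.0
    status = "on track" if progress >= WARNING_THRESHOLD else "early warning"
    print(f"{name}: {progress:.0%} of planned progress ({status})")
```

In practice such checks would draw on the performance monitoring system's own data sources rather than hand-entered values; the point is only that indicators become useful when they are routinely compared against explicit targets.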
FOR MORE INFORMATION:
World Bank (2000). Key Performance Indicator Handbook. Washington, D.C.
Hatry, H. (1999). Performance Measurement: Getting Results. The Urban Institute, Washington, D.C.

The Logical Framework Approach

What is it?
The logical framework (LogFrame) helps to clarify objectives of any project, program, or policy. It aids in the identification of the expected causal links--the "program logic"--in the following results chain: inputs, processes, outputs (including coverage or "reach" across beneficiary groups), outcomes, and impact. It leads to the identification of performance indicators at each stage in this chain, as well as risks which might impede the attainment of the objectives. The LogFrame is also a vehicle for engaging partners in clarifying objectives and designing activities. During implementation the LogFrame serves as a useful tool to review progress and take corrective action.

What can we use it for?
Improving quality of project and program designs--by requiring the specification of clear objectives, the use of performance indicators, and assessment of risks.
Summarizing design of complex activities.
Assisting the preparation of detailed operational plans.
Providing objective basis for activity review, monitoring, and evaluation.

ADVANTAGES:
Ensures that decision-makers ask fundamental questions and analyze assumptions and risks.
Engages stakeholders in the planning and monitoring process.
When used dynamically, it is an effective management tool to guide implementation, monitoring, and evaluation.

DISADVANTAGES:
If managed rigidly, stifles creativity and innovation.
If not updated during implementation, it can be a static tool that does not reflect changing conditions.
Training and follow-up are often required.

COST:
Low to medium, depending on extent and depth of participatory process used to support the approach.

SKILLS REQUIRED:
Minimum 3-5 days of training for facilitators; additional facilitation skills required for use in participatory planning and management.

TIME REQUIRED:
Several days to several months, depending on scope and depth of participatory process.

FOR MORE INFORMATION:
World Bank (2000). The LogFrame Handbook. World Bank: http://wbln1023/OCS/Quality.nsf/Main/MELFHandBook/$File/LFhandbook.pdf
GTZ (1997). ZOPP: Objectives-Oriented Project Planning: http://www.unhabitat.org/cdrom/governance/html/books/zopp_e.pdf
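As an illustrative aside, not taken from the handbooks cited above, the results chain of a LogFrame can be sketched as structured data, with each level carrying its own statement, indicators, and assumptions. The project, indicators, and assumptions below are invented.

```python
# Minimal sketch of a logical framework as structured data. The project,
# indicators, and assumptions are invented for illustration only.

logframe = {
    "impact": {
        "statement": "Improved literacy among primary-school children",
        "indicators": ["Grade 4 reading scores"],
        "assumptions": ["Households keep children enrolled"],
    },
    "outcome": {
        "statement": "More effective classroom teaching",
        "indicators": ["Pupil-teacher ratio", "Teacher attendance rate"],
        "assumptions": ["Trained teachers remain in post"],
    },
    "outputs": {
        "statement": "Additional teachers recruited and trained",
        "indicators": ["Number of teachers trained"],
        "assumptions": ["Training budget released on schedule"],
    },
    "inputs": {
        "statement": "Funding, trainers, training materials",
        "indicators": ["Budget disbursed"],
        "assumptions": [],
    },
}

# Print the results chain from inputs up to impact, with its indicators and risks.
for level in ["inputs", "outputs", "outcome", "impact"]:
    entry = logframe[level]
    print(f"{level.upper()}: {entry['statement']}")
    print(f"  indicators:  {', '.join(entry['indicators'])}")
    if entry["assumptions"]:
        print(f"  assumptions: {', '.join(entry['assumptions'])}")
```

Laying the matrix out this way simply makes explicit what a LogFrame asks for at every level: a statement, the indicators that will evidence it, and the risks or assumptions that could break the causal link to the next level.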
Theory-Based Evaluation

What is it?
Theory-based evaluation has similarities to the LogFrame approach but allows a much more in-depth understanding of the workings of a program or activity--the "program theory" or "program logic." In particular, it need not assume simple linear cause-and-effect relationships. For example, the success of a government program to improve literacy levels by increasing the number of teachers might depend on a large number of factors. These include, among others, the availability of classrooms and textbooks, the likely reactions of parents, school principals and schoolchildren, the skills and morale of teachers, the districts in which the extra teachers are to be located, the reliability of government funding, and so on.

By mapping out the determining or causal factors judged important for success, and how they might interact, evaluators can decide which steps should be monitored as the program develops, to see how far they are in fact borne out. This allows the critical success factors to be identified. And where the data show these factors have not been achieved, a reasonable conclusion is that the program is less likely to be successful in achieving its objectives.

What can we use it for?
Mapping design of complex activities.
Improving planning and management.

ADVANTAGES:
Provides early feedback about what is or is not working, and why.
Allows early correction of problems as soon as they emerge.
Assists identification of unintended side-effects of the program.
Helps in prioritizing which issues to investigate in greater depth, perhaps using more focused data collection or more sophisticated M&E techniques.
Provides basis to assess the likely impacts of programs.

DISADVANTAGES:
Can easily become overly complex if the scale of activities is large or if an exhaustive list of factors and assumptions is assembled.
Stakeholders might disagree about which determining factors they judge important, which can be time-consuming to address.

COST:
Medium--depends on the depth of analysis and especially the depth of data collection undertaken to investigate the workings of the program.

SKILLS REQUIRED:
Minimum 3-5 days of training for facilitators.

TIME REQUIRED:
Can vary greatly, depending on the depth of the analysis, the duration of the program or activity, and the depth of the M&E work undertaken.

FOR MORE INFORMATION:
Weiss, Carol H. (1998). Evaluation. Second edition. Prentice Hall, New Jersey.
Weiss, Carol H. (2000). "Theory-based evaluation: theories of change for poverty reduction programs." In O. Feinstein and R. Picciotto (eds.), Evaluation and Poverty Reduction. Operations Evaluation Department, The World Bank, Washington, D.C.
Mayne, John (1999). Addressing Attribution Through Contribution Analysis: Using Performance Measures Sensibly. Office of the Auditor General of Canada working paper, Ottawa: http://www.oag-bvg.gc.ca/domino/other.nsf/html/99dp1_e.html

Formal Surveys

What are they?
Formal surveys can be used to collect standardized information from a carefully selected sample of people or households. Surveys often collect comparable information for a relatively large number of people in particular target groups.

What can we use them for?
Providing baseline data against which the performance of the strategy, program, or project can be compared.
Comparing different groups at a given point in time.
Comparing changes over time in the same group.
Comparing actual conditions with the targets established in a program or project design.
Describing conditions in a particular community or group.
Providing a key input to a formal evaluation of the impact of a program or project.
Assessing levels of poverty as a basis for preparation of poverty reduction strategies.

ADVANTAGES:
Findings from the sample of people interviewed can be applied to the wider target group or the population as a whole.
Quantitative estimates can be made for the size and distribution of impacts.

DISADVANTAGES:
With the exception of the CWIQ, results are often not available for a long period of time.
The processing and analysis of data can be a major bottleneck for the larger surveys, even where computers are available.
LSMS and household surveys are expensive and time-consuming.
Many kinds of information are difficult to obtain through formal interviews.

COST:
Ranges from roughly $30-60 per household for the CWIQ to $170 per household for the LSMS. Costs will be significantly higher if there is no master sampling frame for the country.

SKILLS REQUIRED:
Sound technical and analytical skills for sample and questionnaire design, data analysis, and processing.

TIME REQUIRED:
Depends on sample size. The CWIQ can be completed in 2 months. The LSMS generally requires 18 months to 2 years.

Some Types of Survey

Multi-Topic Household Survey (also known as Living Standards Measurement Survey--LSMS) is a multi-subject integrated survey that provides a means to gather data on a number of aspects of living standards to inform policy. These surveys cover: spending, household composition, education, health, employment, fertility, nutrition, savings, agricultural activities, other sources of income.

Single-topic household surveys cover a narrower range of issues in more depth.

Core Welfare Indicators Questionnaire (CWIQ) is a household survey that measures changes in social indicators for different population groups--specifically indicators of access, utilization, and satisfaction with social and economic services. It is a quick and effective tool for improving activity design, targeting services to the poor and, when repeated annually, for monitoring activity performance. Preliminary results can be obtained within 30 days of the CWIQ survey.

Client Satisfaction (or Service Delivery) Survey is used to assess the performance of government services based on client experience. The surveys shed light on the constraints clients face in accessing public services, their views about the quality and adequacy of services, and the responsiveness of government officials. These surveys are usually conducted by a government ministry or agency.

Citizen Report Cards have been conducted by NGOs and think-tanks in several countries. Similar to service delivery surveys, they have also investigated the extent of corruption encountered by ordinary citizens. A notable feature has been the widespread publication of the findings.

FOR MORE INFORMATION:
Sapsford, R. (1999). Survey Research. Sage, Newbury Park, CA.
Core Welfare Indicators Questionnaire: http://www4.worldbank.org/afr/stats/cwiq.cfm
LSMS: http://www.worldbank.org/lsms/
Client Satisfaction Surveys: http://www4.worldbank.org/afr/stats/wbi.cfm#sds
Citizen Report Cards: http://lnweb18.worldbank.org/ESSD/sdvext.nsf/60ByDocName/CitizenReportCardSurveysANoteontheConceptandMethodology/$FILE/CRC+SD+note.pdf
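As a purely illustrative aside on survey design, not drawn from this report, the sketch below uses the standard textbook approximation for the number of respondents needed to estimate a population proportion within a chosen margin of error, inflated by a design effect to allow for cluster sampling. All parameter values are assumptions chosen for the example.

```python
import math

def sample_size_for_proportion(p=0.5, margin_of_error=0.05, z=1.96, design_effect=1.0):
    """Standard approximation for the number of respondents needed to estimate
    a proportion p within a given margin of error at roughly 95% confidence
    (z = 1.96). A design effect > 1 inflates the sample to allow for the loss
    of precision from cluster sampling."""
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n * design_effect)

# Example: estimating a satisfaction rate to within +/- 5 percentage points,
# assuming a clustered household sample with a design effect of 2.
print(sample_size_for_proportion(p=0.5, margin_of_error=0.05, design_effect=2.0))  # about 769
```

Real survey designs also allow for non-response and for the number of estimates wanted for sub-groups, which is one reason the LSMS-type surveys described above are so much larger and more expensive than rapid instruments such as the CWIQ.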
Rapid Appraisal Methods

What are they?
Rapid appraisal methods are quick, low-cost ways to gather the views and feedback of beneficiaries and other stakeholders, in order to respond to decision-makers' needs for information.

What can we use them for?
Providing rapid information for management decision-making, especially at the project or program level.
Providing qualitative understanding of complex socioeconomic changes, highly interactive social situations, or people's values, motivations, and reactions.
Providing context and interpretation for quantitative data collected by more formal methods.

ADVANTAGES:
Low cost.
Can be conducted quickly.
Provides flexibility to explore new ideas.

DISADVANTAGES:
Findings usually relate to specific communities or localities--thus difficult to generalize from findings.
Less valid, reliable, and credible than formal surveys.

COST:
Low to medium, depending on the scale of methods adopted.

SKILLS REQUIRED:
Non-directive interviewing, group facilitation, field observation, note-taking, and basic statistical skills.

TIME REQUIRED:
Four to six weeks, depending on the size and location of the population interviewed and the number of sites observed.

Rapid Appraisal Methods

Key informant interview--a series of open-ended questions posed to individuals selected for their knowledge and experience in a topic of interest. Interviews are qualitative, in-depth, and semi-structured. They rely on interview guides that list topics or questions.

Focus group discussion--a facilitated discussion among 8-12 carefully selected participants with similar backgrounds. Participants might be beneficiaries or program staff, for example. The facilitator uses a discussion guide. Note-takers record comments and observations.

Community group interview--a series of questions and facilitated discussion in a meeting open to all community members. The interviewer follows a carefully prepared questionnaire.

Direct observation--use of a detailed observation form to record what is seen and heard at a program site. The information may be about ongoing activities, processes, discussions, social interactions, and observable results.

Mini-survey--a structured questionnaire with a limited number of close-ended questions that is administered to 50-75 people. Selection of respondents may be random or "purposive" (interviewing stakeholders at locations such as a clinic for a health care survey).

FOR MORE INFORMATION:
USAID. Performance Monitoring and Evaluation Tips, #s 2, 4, 5, 10: http://www.usaid.gov/pubs/usaid_eval/#02
Kumar, K. (1993). Rapid Appraisal Methods. The World Bank, Washington, D.C.

Participatory Methods

What are they?
Participatory methods provide active involvement in decision-making for those with a stake in a project, program, or strategy and generate a sense of ownership in the M&E results and recommendations.

What can we use them for?
Learning about local conditions and local people's perspectives and priorities to design more responsive and sustainable interventions.
Identifying and troubleshooting problems during implementation.
Evaluating a project, program, or policy.
Providing knowledge and skills to empower poor people.

ADVANTAGES:
Examines relevant issues by involving key players in the design process.
Establishes partnerships and local ownership of projects.
Enhances local learning, management capacity, and skills.
Provides timely, reliable information for management decision-making.

DISADVANTAGES:
Sometimes regarded as less objective.
Time-consuming if key stakeholders are involved in a meaningful way.
Potential for domination and misuse by some stakeholders to further their own interests.

COST:
Low to medium. Costs vary greatly, depending on scope and depth of application and on how local resource contributions are valued.

SKILLS REQUIRED:
Minimum several days' training for facilitators.
TIME REQUIRED:
Varies greatly, depending on scope and depth of application.

Commonly Used Participatory Tools

Stakeholder analysis is the starting point of most participatory work and social assessments. It is used to develop an understanding of the power relationships, influence, and interests of the various people involved in an activity and to determine who should participate, and when.

Participatory rural appraisal is a planning approach focused on sharing learning between local people, both urban and rural, and outsiders. It enables development managers and local people to assess and plan appropriate interventions collaboratively, often using visual techniques so that non-literate people can participate.

Beneficiary assessment involves systematic consultation with project beneficiaries and other stakeholders to identify and design development initiatives, signal constraints to participation, and provide feedback to improve services and activities.

Participatory monitoring and evaluation involves stakeholders at different levels working together to identify problems, collect and analyze information, and generate recommendations.

FOR MORE INFORMATION:
Guijt, I. and J. Gaventa (1998). Participatory Monitoring and Evaluation. Institute of Development Studies, University of Sussex, Brighton, U.K.: http://www.ids.ac.uk/ids/bookshop/briefs/brief12.html
http://www.worldbank.org/participation/partme.htm

Public Expenditure Tracking Surveys

What are they?
Public expenditure tracking surveys (PETS) track the flow of public funds and determine the extent to which resources actually reach the target groups. The surveys examine the manner, quantity, and timing of releases of resources to different levels of government, particularly to the units responsible for the delivery of social services such as health and education. PETS are often implemented as part of larger service delivery and facility surveys which focus on the quality of service, characteristics of the facilities, their management, incentive structures, etc.

What can we use them for?
Diagnosing problems in service delivery quantitatively.
Providing evidence on delays, "leakage," and corruption.

ADVANTAGES:
Supports the pursuit of accountability when little financial information is available.
Improves management by pinpointing bureaucratic bottlenecks in the flow of funds for service delivery.

DISADVANTAGES:
Government agencies may be reluctant to open their accounting books.
Cost is substantial.

COST:
Can be high until national capacities to conduct them have been established. For example, the first PETS in Uganda cost $60,000 for the education sector and $100,000 for the health sector.

SKILLS REQUIRED:
Sound technical and analytical skills for sample and questionnaire design, data analysis and processing, and good understanding of the sector to be assessed.

TIME REQUIRED:
Five to six months (the survey alone takes 1-2 months).

FOR MORE INFORMATION:
http://www1.worldbank.org/publicsector/pe/trackingsurveys.htm
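To show the kind of arithmetic a tracking survey supports, here is a hypothetical sketch, not from the report: it compares what each administrative level reports releasing with what the next level reports receiving, and expresses the shortfall as leakage. The tiers and amounts are invented.

```python
# Hypothetical sketch of the core PETS arithmetic. The administrative levels
# and amounts are invented; a real survey reconciles budget records with
# amounts reported received at each tier.

flow = [
    # (level, amount reported received or released, in currency units)
    ("Central ministry (amount released)", 1_000_000),
    ("District education offices", 820_000),
    ("Schools (front-line facilities)", 610_000),
]

released = flow[0][1]
for (upper, sent), (lower, received) in zip(flow, flow[1:]):
    leakage = 1 - received / sent
    print(f"{upper} -> {lower}: {leakage:.0%} of funds not accounted for")

overall = 1 - flow[-1][1] / released
print(f"Overall, {overall:.0%} of released funds did not reach facilities")
```

The survey work itself lies in assembling comparable records at each tier; once that is done, the leakage calculation is as simple as the sketch suggests.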
Cost-Benefit and Cost-Effectiveness Analysis

What are they?
Cost-benefit and cost-effectiveness analysis are tools for assessing whether or not the costs of an activity can be justified by the outcomes and impacts. Cost-benefit analysis measures both inputs and outputs in monetary terms. Cost-effectiveness analysis estimates inputs in monetary terms and outcomes in non-monetary quantitative terms (such as improvements in student reading scores).

What can we use them for?
Informing decisions about the most efficient allocation of resources.
Identifying projects that offer the highest rate of return on investment.

ADVANTAGES:
A good-quality approach for estimating the efficiency of programs and projects.
Makes explicit the economic assumptions that might otherwise remain implicit or be overlooked at the design stage.
Useful for convincing policy-makers and funders that the benefits justify the activity.

DISADVANTAGES:
Fairly technical, requiring adequate financial and human resources.
The requisite data for cost-benefit calculations may not be available, and projected results may be highly dependent on the assumptions made.
Results must be interpreted with care, particularly in projects where benefits are difficult to quantify.

COST:
Varies greatly, depending on the scope of the analysis and the availability of data.

SKILLS REQUIRED:
The procedures used in both types of analysis are often highly technical. They require skill in economic analysis and the availability of relevant economic and cost data.

TIME REQUIRED:
Varies greatly, depending on the scope of the analysis and the availability of data.

FOR MORE INFORMATION:
Belli, P., et al. (2000). Economic Analysis of Investment Operations: Analytical Tools and Practical Applications. The World Bank, Washington, D.C.

GOOD PRACTICE EXAMPLES OF COST-BENEFIT ANALYSIS:
http://kms.worldbank.org/edunet/TEN_DIMENSIONS/DIM_4/cb_ce.htm
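To make the distinction between the two analyses concrete, the sketch below computes a net present value and benefit-cost ratio where benefits are monetized, and a cost-effectiveness ratio where the outcome is counted in non-monetary units. It is an illustration only: the cash flows, the 10 percent discount rate, and the outcome figure are all invented.

```python
# Illustrative sketch only: the cash flows, the 10% discount rate, and the
# outcome figure below are invented, not taken from this report.

def npv(cash_flows, rate=0.10):
    """Net present value of yearly cash flows, with year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Cost-benefit analysis: costs and benefits both in monetary terms.
costs    = [100_000, 20_000, 20_000, 20_000]  # year-0 investment plus recurrent costs
benefits = [0, 70_000, 70_000, 70_000]        # monetized benefits
net_flows = [b - c for b, c in zip(benefits, costs)]

print(f"NPV of net benefits: {npv(net_flows):,.0f}")
print(f"Benefit-cost ratio:  {npv(benefits) / npv(costs):.2f}")

# Cost-effectiveness analysis: costs in money, outcomes in non-monetary units
# (here, additional children reading at grade level).
additional_readers = 1_200
print(f"Cost per additional reader: {npv(costs) / additional_readers:,.0f}")
```

The cost-effectiveness ratio sidesteps the hardest step in cost-benefit analysis, putting a monetary value on the outcome, but it can only rank alternatives that pursue the same outcome; it cannot say whether the activity is worth doing at all.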
Impact Evaluation

What is it?
Impact evaluation is the systematic identification of the effects--positive or negative, intended or not--on individual households, institutions, and the environment caused by a given development activity such as a program or project. Impact evaluation helps us better understand the extent to which activities reach the poor and the magnitude of their effects on people's welfare. Impact evaluations can range from large-scale sample surveys in which project populations and control groups are compared before and after, and possibly at several points during, the program intervention; to small-scale rapid assessments and participatory appraisals where estimates of impact are obtained from combining group interviews, key informants, case studies, and available secondary data.

What can we use it for?
Measuring outcomes and impacts of an activity and distinguishing these from the influence of other, external factors.
Helping to clarify whether costs for an activity are justified.
Informing decisions on whether to expand, modify, or eliminate projects, programs, or policies.
Drawing lessons for improving the design and management of future activities.
Comparing the effectiveness of alternative interventions.
Strengthening accountability for results.

ADVANTAGES:
Provides estimates of the magnitude of outcomes and impacts for different demographic groups, regions, or over time.
Provides answers to some of the most central development questions--to what extent are we making a difference? What are the results on the ground? How can we do better?
Systematic analysis and rigor can give managers and policy-makers added confidence in decision-making.

DISADVANTAGES:
Some approaches are very expensive and time-consuming, although faster and more economical approaches are also used.
Reduced utility when decision-makers need information quickly.
Difficulties in identifying an appropriate counterfactual.

COST:
A number of World Bank impact evaluations have ranged from $200,000 to $900,000, depending on program size, complexity, and data collection. Simpler and more rapid impact evaluations can be conducted for significantly less than $100,000, and in some cases for as little as $10,000-$20,000.

SKILLS REQUIRED:
Strong technical skills in social science research design, management, analysis, and reporting. Ideally, a balance of quantitative and qualitative research skills on the part of the evaluation team.

TIME REQUIRED:
Can take up to 2 years or more. Rapid assessment evaluations can often be conducted in less than 6 months.
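Before turning to the design options summarized below, here is a minimal sketch of the comparison that underlies the more rigorous designs: project and control groups measured before and after the intervention, with the double difference taken as the impact estimate. The figures are invented for illustration and are not from this report.

```python
# Illustrative sketch only: invented survey means for a project group and a
# comparison (control) group, measured before and after the intervention.
# The double difference nets out both the pre-existing gap between groups
# and the change that would have happened anyway.

before = {"project": 42.0, "control": 40.0}   # e.g., mean household outcome
after  = {"project": 55.0, "control": 46.0}

change_project = after["project"] - before["project"]   # 13.0
change_control = after["control"] - before["control"]   #  6.0
estimated_impact = change_project - change_control      #  7.0

print(f"Change in project group:  {change_project:+.1f}")
print(f"Change in control group:  {change_control:+.1f}")
print(f"Estimated project impact: {estimated_impact:+.1f}")
```

How credible that estimate is depends almost entirely on how the control group was chosen, which is the main difference between the four designs described below.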
EXAMPLES OF IMPACT EVALUATION DESIGNS

Randomized evaluation designs, involving the collection of information on project and control groups at two or more points in time, provide the most rigorous statistical analysis of project impacts and the contribution of other factors. But in practice it is rarely possible to use these designs, for reasons of cost, time, or methodological or ethical constraints. Thus most impact evaluations use less expensive and less rigorous evaluation designs. The table below describes four approaches to impact evaluation design in development evaluation. The first is an example of a randomized evaluation design; the second is a quasi-experimental design in which a "non-equivalent" control group is selected to match as closely as possible the characteristics of the project population; in the third example the project population is compared with a non-equivalent control group after the project has been implemented; and the fourth is a rapid assessment evaluation which combines group interviews, key informants, case studies, and secondary data. Each successive model sacrifices methodological rigor in return for significant reductions in cost and time requirements.

FOR MORE INFORMATION:
Baker, J. (2000). Evaluating the Poverty Impact of Projects: A Handbook for Practitioners. The World Bank, Washington, D.C. http://www.worldbank.org/poverty/library/impact.htm
World Bank Web site on impact evaluation: http://www.worldbank.org/poverty/impact/
Roche, C. (1999). Impact Assessment for Development Agencies: Learning to Value Change. Oxfam, Oxford.

4 Models of Impact Evaluation

Model 1. Randomized pre-test post-test evaluation.
Design: Subjects (families, schools, communities, etc.) are randomly assigned to project and control groups. Questionnaires or other data collection instruments (anthropometric measures, school performance tests, etc.) are applied to both groups before and after the project intervention. Additional observations may also be made during project implementation.
Example: Water supply and sanitation, or the provision of other services such as housing and community infrastructure, where the demand exceeds supply and beneficiaries are selected by lottery; for example, the Bolivia Social Fund.
Indicative cost and time: 1-5 years, depending on the time which must elapse before impacts can be observed. Cost can range from $50,000 to $1 million, depending on the size and complexity of the program being studied.

Model 2. Quasi-experimental design with before and after comparisons of project and control populations.
Design: Where randomization is not possible, a control group is selected which matches the characteristics of the project group as closely as possible. Sometimes the types of communities from which project participants were drawn will be selected. Where projects are implemented in several phases, participants selected for subsequent phases can be used as the control for the first-phase project group.
Example: These models have been applied in World Bank low-cost housing programs in El Salvador, Zambia, Senegal, and the Philippines.
Indicative cost and time: Cost and timing similar to Model 1.

Model 3. Ex-post comparison of project and non-equivalent control group.
Design: Data are collected on project beneficiaries, and a non-equivalent control group is selected as for Model 2. Data are only collected after the project has been implemented. Multivariate analysis is often used to statistically control for differences in the attributes of the two groups.
Example: Assessing the impacts of micro-credit programs in Bangladesh. Villages where micro-credit programs were operating were compared with similar villages without these credit programs.
Indicative cost and time: $50,000 upwards. The cost will usually be one third to one half of a comparable study using Models 1 or 2.

Model 4. Rapid assessment ex-post impact evaluations.
Design: Some evaluations only study groups affected by the project, while others include matched control groups. Participatory methods can be used to allow groups to identify changes resulting from the project, who has benefited and who has not, and what the project's strengths and weaknesses were. Triangulation is used to compare the group information with the opinions of key informants and information available from secondary sources. Case studies on individuals or groups may be produced to provide more in-depth understanding of the processes of change.
Example: Assessing community-managed water supply projects in Indonesia.
Indicative cost and time: $25,000 upwards (the Indonesia study cost $150,000). Some studies are completed in 1-2 months; others take a year or longer.

Additional Resources on Monitoring and Evaluation

World Wide Web sites
World Bank Evaluation, Monitoring and Quality Enhancement: http://www.worldbank.org/evaluation/
Monitoring & Evaluation Capacity Development: http://www.worldbank.org/oed/ecd/
Monitoring and Evaluation News: http://www.mande.co.uk/