AFRICAN DEVELOPMENT BANK & WORLD BANK OPERATIONS EVALUATION DEPARTMENTS

Monitoring & Evaluation Capacity Development in Africa
Selected Proceedings from a Seminar in Abidjan, November 1998

Contents

Acknowledgments ........ v
Foreword ........ vii
Abbreviations and Acronyms ........ ix
Executive Summary ........ xi

Part I: Perspectives on Monitoring and Evaluation in Africa ........ 1
1. Welcome Message ........ 3
   Ahmed Bahgat
2. Opening Statement ........ 7
   Tidjane Thiam
3. The Goals of the Workshop ........ 11
   Gabriel M.B. Kariisa
4. Introductory Statement ........ 17
   Niels Dabelstein
5. Evaluation Capacity Development: Lessons Learned by the UNDP ........ 19
   Arild O. Hauge
6. Evaluation Capacity Development: Issues and Challenges ........ 39
   Robert Picciotto

Part II: Experiences in Evaluation Capacity Development ........ 47
7. Evaluation Development in Morocco: Approach Through Training ........ 49
   Khaddour Tahri and Abderrahmane Haouach
8. Monitoring and Evaluation: The Case for Zimbabwe ........ 63
   Molly Mandinyenya
9. Country Experience: Sectorwide Approach to M&E in the Health Sector in Ghana ........ 79
   Isaac Adams
10. The Agricultural Development M&E System in Morocco: Objectives, Content, and Implementation Status ........ 103
   Abdelaziz Belouafi
11.
M&E Capacity Building in Uganda, with a Focus on Public Expenditure ........ 117
   Michael Wamibu
12. The M&E System and Responsibilities in the Republic of Guinea ........ 125
   Mamadou Bah
13. The Link Between M&E and Decisionmaking in Zimbabwe ........ 131
   Michael S. Sibanda
14. Strengthening Evaluation Capacity in the Latin American and Caribbean Region ........ 139
   Jean S. Quesnel

Part III: Strategies and Resources for Building Evaluation Capacity ........ 157
15. Development Through Institutions: A Review of Institutional Development Strategies in Norwegian Bilateral Aid ........ 159
   Stein-Erik Kruse
16. Project Implementation M&E: Ghana's Experience ........ 187
   Hudu Siita
17. Conducting Joint Sector/Thematic Evaluations ........ 203
   Niels Dabelstein
18. Summary of Presentations and Discussions ........ 207

Part IV: Options for Evaluation Capacity Development ........ 217
19. Preparing an Action Plan for Evaluation Capacity Development ........ 219
   Keith Mackay and Ray Rist
20. Options for Evaluation Capacity Development ........ 229

Part V: Challenges and Prospects ........ 233
21. Wrap-Up Session ........ 235
   Gabriel M.B. Kariisa
22. The Road Ahead ........ 241
   Robert Picciotto

Annexes ........ 243
List of Participants ........ 245
Summary Version of Agenda ........ 259
Statement by Switzerland ........
267
   Catherine Cudre-Mauroux

Acknowledgements

The Regional Seminar and Workshop, Monitoring and Evaluation Capacity Development in Africa, held in Abidjan, 16-19 November 1998, brought together delegates from some 12 African countries, donors, and private sector professionals. For some time now, public sector reform, public expenditure control, and evaluation capacity development have attracted much attention from developing country governments and donors alike. Conferences, meetings, and training sessions have been held in Asia, Latin America, and the Caribbean. The first of these conferences was held in Abidjan in 1990; eight years later it was time to take stock of what African governments and donors had accomplished.

The sponsors of this seminar-workshop were:

* The African Development Bank Group (OPEV/CADI)
* The World Bank Group (OED/EDI-LLC)
* United Nations Development Programme (UNDP)
* Danish International Development Assistance (DANIDA)
* Norway's Royal Ministry of Foreign Affairs
* Swiss Agency for Development Cooperation (SDC)
* Swedish International Development Cooperation Agency (SIDA).

Special thanks go to M. Manai and Linda Morra for their sterling work in organizing the seminar and workshop, and to Raymond Gervais for editorial assistance. Others who provided invaluable assistance in preparing the agenda are also thanked, including Jaime Biderman, Niels Dabelstein, Gabriel Kariisa, Robert Picciotto, Osvaldo Feinstein, and Ray Rist.

This study was produced in the Partnerships and Knowledge Group (OEDPK) by the Dissemination and Outreach Unit. The unit is directed by Elizabeth Campbell-Page, Task Manager, and includes Caroline McEuen and Leo Demesmaker (editors), Kathy Strauss and Lunn Lestina (desktop design and layout), and Juicy Qureishi-Huq (administrative assistance).
Foreword

The organization of the 1998 seminar was the product of a long series of activities spanning more than a decade of meetings, missions, contacts, and attempts by national governments in Africa, and indeed throughout the developing world, to develop monitoring and evaluation (M&E) capacities.

In March 1987, a seminar intended for beneficiary countries was held under the auspices of the Development Assistance Committee (DAC) of the Organization for Economic Cooperation and Development (OECD). The seminar, which offered donors and beneficiaries the opportunity to exchange views on the objectives, means, and experiences of evaluation, helped to highlight the need to strengthen the evaluation capacities of developing countries. The summary report of the discussions, Evaluation in Developing Countries: A Step Towards Dialogue, was published by the OECD in 1988.

As part of the follow-up, the DAC Group of Experts on aid evaluation agreed that a series of seminars would be planned at the regional level to intensify the dialogue, discuss problems unique to a given region, and recommend concrete and specific actions to strengthen the evaluation capacities of developing countries. Consequently, a proposal to organize a seminar on evaluation in Africa, presented jointly by the African Development Bank (ADB) and DAC, was approved by the OECD in 1989. These decisions brought about the first seminar in Abidjan, Côte d'Ivoire, 2-4 May 1990.

The period between the two regional seminars held in Africa saw the organization of other regional seminars in Asia and Latin America. Lessons have been identified there and have served as a source of ideas for the organizers of this seminar. The experience of the various development banks, and the evaluation community in general, shows that the success or failure of evaluation capacity development (ECD) depends on three conditions:

1.
The awareness and appreciation, at the governmental decisionmaking level, of the importance and necessity of evaluation (in other words, the existence of a demand for evaluation)
2. The institutionalization and meaningful integration of the various evaluation functions in the government machinery at the national, sectoral, program/project, and provincial levels
3. The development of human and financial resources to support a professional, dedicated, and effective cadre of evaluators and evaluation managers.

Evaluation capacity in a country rests on a real demand for evaluation to address real information needs, on appropriate institutional structures to respond to those needs, and on the capability of evaluation personnel, given the necessary resources, to provide the information needed in a responsive, professional, and timely fashion. The objectives set for the 1998 Abidjan seminar stem from all these elements. One was to provide an overview of progress in evaluation capacity development in Africa, including the sharing of lessons of experience. Another was to build consensus on the purposes and elements of M&E in support of development. A third was to identify strategies and resources for building M&E supply and demand in African countries. A fourth was to help country teams, representing 12 African countries, develop preliminary action plans for M&E systems in their countries. A final objective was to support the creation of country and regional networks to encourage follow-on work.

It is hoped that the discussions presented in this report will further interest in M&E and will lead to the construction of robust and sustainable M&E systems.
viii Abbreviations and Acronyms ACBF African Capacity Building DESS Dipl6me d'Enseignement Foundation Superieur Specialise ACGP Large-Scale Projects and DPA Provincial Directorates for Administration (Guinea) Agriculture (Morocco) ADB African Development Bank DPAE Directorate of Programming and ARDA Agricultural and Rural Develop- Economic Affairs (Morocco) ment Authority (Zimbabwe) DPCU District Planning Coordinating BEPS Borrowers' Ex Post Evaluation Unit Reports DREF Regional Directorates of Water BES Bank Evaluation System (IADB) and Forestry Resources BNETD National Bureau for Technical (Morocco) Studies and Development ECB Evaluation Capacity Building (Cote d'Ivoire) ECD Evaluation Capacity Development CAG Comptroller and Auditor General EDI-LLC CARD Coordinated Agricultural and Economic Development Institute- Rural Development Learning and Leadership Center CAS Country Assistance Strategy (World Bank) CCD Cabinet Committee on EVO Evaluation Office (IADB) Development (Zimbabwe) GHA Ghana Highway Authority CIPP Project Preselection GLSS Ghana Living Standards Survey Interministeral Committee IADB Inter-American Development (Morocco) Bank CNEP National Center for Program IRDEP Integrated Rural Development Evaluation (Morocco) (Zimbabwe) CPS Country Partnership Strategy M&E Monitoring & Evaluation CWIQ Core Welfare, Questionnaire MDA Ministries, Departments, and Indicators Survey Agencies DAC Development Assistance MID Monitoring and Implementation Committee Division, Office of the President DANIDA and Cabinet (Zimbabwe) Danish International MIDEPLAN Development Assistance Ministry of Planning (Costa Rica) DEPP Directorate of Public MOF Ministry of Finance Establishments and Participation MTFF Medium-Term Expenditures (Morocco) Framework (Guinea) ix NDPC National Development Planning PSC Public Service Commission Commission (Ghana) PSIP Public Sector Investment NEI Netherlands Economic Institute Program NEPC National Economic Planning PWD Public Works Department 
Commission (Zimbabwe) RBM Results-Based Management NGO Nongovernmental Organization RCC Regional Coordinating Councils NPA National Planning Agency RPCU Regional Planning Coordinating (Zimbabwe) Unit NSDS National Service Delivery Survey SDC Swiss Agency for Development (Uganda) Cooperation NWP National Working Group SHD Sustainable Human Development (Zimbabwe) SIDA Swedish International OECD Organization for Economic Development Cooperation Cooperation and Development Agency OED Operations Evaluation SINE National Evaluation System Department (World Bank) (Costa Rica) OEO Operation Evaluation Officer SSE M&E Service (Guinea) (IADB) SSEDA Agricultural Development ORMVA Regional Offices for Agricultural Monitoring and Evaluation Development (Morocco) System (Morocco) PAC Public Accounts Committee TOR Terms of Reference (Zimbabwe) TQM Total Quality Management PCR Project Completion Report UNDP United Nations Development PIP Public Investment Program Programme PMEI Planning, Monitoring, Evaluation, WID Women in Development and Implementation WPAE Working Party on Aid Evaluation PNAP National Training Program in (DAC) Project Analysis and Management ZIMPREST (Morocco) Zimbabwe Programme on PPME Policy Planning Monitoring and Economic and Social Evaluation Transformation PPR Project Performance Review x Executive Summary The first seminar on evaluation in Africa, presented jointly by the African Development Bank (ADB) and DAC (Development Assis- tance Committee), was approved by the Organization for Economic Cooperation and Development (OECD) in 1989. It was held in Abidjan, Cote d'Ivoire, 2-4 May 1990. Its objectives included the clarification of evaluation needs as perceived by African countries themselves and the exploration of ways and means of strengthening self-evaluation capacities. The growing interest in, and demand for, monitoring and evaluation (M&E) systems reflects the impact of the 1990 seminar. 
Similar seminars have been organized in Asia and Latin America in recent years, and these too have provided a rich source of lessons and ideas. The 1998 Abidjan seminar/workshop (16-19 November) was a follow-up to many of the points raised eight years before. It proposed:

* To provide an overview of the status of evaluation capacity in Africa in the context of public sector reform and public expenditure management
* To seek a consensus on the purpose, elements, and processes of M&E in support of development
* To share lessons of experience about evaluation capacity development concepts, constraints, and approaches in Africa
* To identify strategies and resources for building M&E supply and demand in African countries
* To provide tools for developing a country M&E action plan
* To provide country teams, representing 10-12 African countries, the knowledge and skills needed to develop preliminary action plans for M&E systems, based on each country's individual circumstances
* To create country networks for follow-on work.

The seminar brought together senior participants from Burkina Faso, Côte d'Ivoire, Ethiopia, Ghana, Guinea, Malawi, Morocco, Mozambique, South Africa, Tanzania, Uganda, and Zimbabwe, and a large number of representatives of development assistance agencies, including the ADB, the World Bank, OECD-DAC, the United Nations Development Programme (UNDP), the British Department for International Development, DANIDA (Denmark), DIS-Norway, the French Ministère des Affaires étrangères, the Inter-American Development Bank, the Swedish International Development Cooperation Agency, and the Swiss Agency for Development Cooperation. The agenda comprised formal presentations by experts in M&E that explored general themes and specific experiences of donor agencies.
National approaches to evaluation systems in African administrations were described by participants, who were then invited to compare experiences and discuss the building of a national M&E system. Parallel sessions were held around specific themes (resources, partnerships, participation, and feedback), all pertaining to strategies and resources for building evaluation supply and demand. In the last two days, workshops were organized to help participants develop their own evaluation capacity development (ECD) action plans and to clarify concepts used in M&E activities.

Discussions during the seminar underlined important directions in African administrations and aid agencies. First, there is a global trend toward more accountable, responsive, and efficient government. Second, the role of evaluation within individual development assistance agencies is gaining in clarity and effectiveness. Third, the outlook for partnership across the development community is brighter than it has ever been. With the spread of results-based management there is a growing demand for ECD programs that would help support planned public sector reforms, including more efficient and transparent public expenditure (budget management) systems.

The diversity of experiences presented in the papers submitted by the participants from the 12 African countries present at the workshop and by the representatives of funding agencies made it clear that there was no standard approach to ECD. Significant factors influenced the adoption, organization, and dissemination of evaluation, including:

* Differing administrative cultures
* The degree and depth of determination to adopt evaluation methods
* The level of technical knowledge in the field
* The weak demand for evaluation in many countries.
In light of these factors, it was widely suggested that a customized approach, based on a diagnosis of the specific set of factors in each case, was the most promising avenue of action.

During the November 18 sessions of the seminar, the organizers suggested that the country groups translate the discussions of the previous days into proposed action plans for their countries. This exercise aimed at grounding ECD options or proposals in national realities by:

* Having country delegates undertake a quick diagnosis of conditions in their country
* Framing a strategy to develop evaluation capacity and improve M&E activities.

In their diagnoses, the country groups analyzed situations of weak (or very weak) demand for, and supply of, M&E activities. Recurrent themes of these diagnoses can be identified:

* Many groups felt that the legislative or administrative frameworks were not suited to the needs of M&E activities (Guinea, Morocco, and Uganda). Existing frameworks did not facilitate the expansion of M&E into new sectors, nor did they offer any encouragement or adequate protection to civil servants (Malawi).
* Although some countries have built a core evaluation sector, weak coordination between ministries (Ghana, Guinea, and Mozambique) has hampered its development and the building of a credible accountability system.
* Most groups noted that the lack of training had been the major constraint to the emergence of a reliable supply of local expertise (for example, Zimbabwe, Ethiopia, and Malawi).
* Without proper budget allocations, M&E becomes an empty shell.

The country groups, although not mandated by their respective governments to design an action plan, were nevertheless asked to identify the initiatives that appeared most feasible for their countries. Box 1 captures the recurrent themes of these plans. These proposed action plans isolated important themes:

* Institutional support both inside and outside Africa for ECD is seen as crucial.
Whether it be improved legislation, a new set of administrative guidelines for evaluation, or increased awareness of these problems by national governments, donors, or international organizations, every avenue must be used. It may well be that action plans need to be devised and programmed by African governments, but an "Action Plan for Donors" could also help sustain the momentum.
* Training support in Africa (in M&E or in evaluation concepts, methods, and practices) appeared to be an important element of any ECD program.
* Two databases were suggested: (1) one that would list evaluators (practitioners, consultants, government officials in charge of M&E, auditing boards, private sector firms), which could be the first step toward the creation of an African Evaluation Society, and (2) another that would collect and make accessible lessons learned and best practices in M&E operations.

The future of ECD may be linked to two central factors that are external to African administrations: the rise of a civil society, reflecting the voices of citizens, and donor support in partnership with governments. More responsive, responsible, and transparent governments are being sought by groups both inside and outside legislative bodies. A coherent and coordinated response by donors was viewed as a pledge that ECD would be part of Africa's future.
Box 1: Summary of initiatives presented in the proposed action plans

Major initiatives in the proposed action plans          Number of action plans including the initiative

Institutional reforms
* Adoption and application of appropriate legislation, with production of manuals, guides, and other tools ........ 6
* Building of consensus, awareness, and acceptance for M&E activities in key decision centers ........ 8
* Transparency rules applied to evaluations, enhancing accountability ........ 1

Human resources development
* Training of trainers, officials, and technicians involved in M&E ........ 10
* Participation in joint evaluations with external funding agencies ........ 1
* Creation of a network of evaluators to facilitate exchanges ........ 3

Resources management
* Allocation of resources for M&E ........ 4
* Creation of a database of information from M&E operations, with proper management tools to disseminate best practices ........ 9

Part I: Perspectives on Monitoring and Evaluation in Africa

1 Welcome Message
Ahmed Bahgat, ADB Vice-President

Distinguished Participants, Ladies, and Gentlemen:

Let me extend to you all a very hearty welcome to this wonderful city of Abidjan and to the African Development Bank. I wish you all a very pleasant and comfortable stay while you are attending this seminar.

Africa is waiting on the threshold of socioeconomic development, requiring massive inputs of resources. But resources are becoming scarcer, and member governments and donor agencies are becoming increasingly aware of the need to ensure the most efficient utilization of the resources available. Accountability for resource use has become indispensable. It is in this context that evaluation, which facilitates the assessment of the effectiveness and impact of investments in achieving set objectives and goals, is now recognized as a fundamental and critical management tool.
The Bank Group has been conscious of the need to develop evaluation capacity in the member countries, and organized, in collaboration with the Development Assistance Committee (DAC) of the Organization for Economic Cooperation and Development (OECD), the first Regional Seminar on Evaluation Capacity Building in Abidjan in 1990. The present Regional Seminar and Workshop on Monitoring & Evaluation Capacity Development in Africa is a follow-up to the earlier seminar, and aims to support the efforts of interested African countries to improve their capabilities to review the effectiveness of their development activities, particularly in the context of public sector reform and public expenditure management.

I am extremely delighted to note the high level of participation both from member countries and donor agencies. That you have chosen to spare time from your busy and punishing schedules to participate in this seminar is itself indicative of the importance your countries and institutions attach to this seminar. This is a good augury. The continued commitment of member governments is an essential condition for the seminar to achieve its long-term objectives. Extensive participation by the donor agencies will ensure that the quality and level of deliberations will be both high and realistic.

You have a heavy agenda ahead of you. The span of coverage is intensive but relevant. I note from the agenda that the seminar will take an overview of the status of evaluation capability in Africa; share lessons of experience about evaluation capacity development; seek consensus on the purpose, elements, and processes of monitoring and evaluation (M&E) in support of development; and identify strategies and resources for building M&E supply and demand in African countries.
It is also scheduled to consider possible tools for country M&E action plans; to provide country teams with sufficient knowledge and skills to develop preliminary action plans for M&E systems uniquely suited to each country; and to create networks for follow-on work. The seminar will also look at issues such as building an M&E system for improved public sector and expenditure management, and finally will organize an evaluation training workshop.

Distinguished participants: I will now share some thoughts with you. At this time, few member countries have institutionalized evaluation systems. Until now, evaluation has been largely a donor-driven exercise, and evaluations have been largely confined to donor-assisted projects and programs. However, a much larger proportion of the development effort is the countries' own. The effect of this seminar should be to set the countries on a path on which they can develop their own capabilities for assessing the impact of all development interventions, including their own. It also needs to be recognized that the standard of institutional capacities in member countries varies considerably, and it is therefore important that M&E capacity development plans not lose sight of the realities of the institutional and staff resources that can reasonably be harnessed for this purpose in member countries.

Some initiatives in the past have tended to remain limited to the monitoring of implementation rather than the sustainability and assessment of the development impact of the interventions. Care needs to be taken that all aspects of M&E are developed to permit the fullest use of the system as an effective management tool. Similarly, development of M&E capability would be incomplete without linkages between M&E systems and the planning apparatus.
It will be important to develop appropriate and strong feedback systems, with efficient storage and retrieval facilities, as links between the M&E and planning machineries.

Friends: Africa is particularly vulnerable to, and has to cope with, the vagaries of nature, varying international commodity prices, and volatile global economic and trade situations. With these uncertainties, constant monitoring and evaluation becomes ever more important to ensure the desired impact and sustainability and to lead Africa to its chosen path of socioeconomic growth and the increased well-being of its inhabitants.

I note that the seminar has been organized in a manner that allows maximum participation by the delegates. Parallel sessions of small groups will facilitate closer interaction between participants and more fruitful discussions. I am sure this will help you, all the delegates, to draw up a workable Action Plan. I wish the seminar great success.

2 Opening Statement
His Excellency, Tidjane Thiam

Ladies and Gentlemen Representing Bilateral and Multilateral Development Financial Institutions, Ladies and Gentlemen:

Permit me, first, on behalf of the Ivorian government, to cordially welcome the participants to the Regional Seminar on Monitoring and Evaluation Capacity Development in Africa. The theme chosen for this seminar is of primary importance to our countries. Indeed, it is essential for us to have the human resources and tools to enable us to design, monitor, and evaluate development actions. This requires ensuring the efficient implementation of programs and projects, while avoiding mistakes and targeting the maximum output for allocated financing. The quality of monitoring and evaluation is a major factor in the ongoing reform efforts in many African countries; there are three main reasons for this.
First, there is a need to make sure that the efforts undertaken in a concerted manner in each of our countries, in the private sector and civil society, are targeted toward one goal: efficient and visible improvement, on the ground, of the conditions of life for Africans. The effectiveness and efficiency we are looking for can only be achieved through the rigorous and methodical monitoring and evaluation of the actions taken.

Second, as you know, our countries receive significant assistance from multilateral and bilateral donors. Our ability to utilize these resources effectively is a legitimate condition for us to receive additional resources. For our partners in development, as well as private investors, the quality of implementation of projects has become a performance criterion for classifying countries. In the context of limited public funds, such funds will continue to increase for countries that utilize them more effectively. The return of economic growth in many of our countries, instead of leading to laxity, should encourage us to show increased rigor and efficiency.

Third, the increasing democratization of our societies has made transparency a priority. Democracy also means rendering an account to the citizen taxpayer. Monitoring and quality evaluation are therefore necessary if the State is to prove to its citizens that it uses financial resources efficiently.

On behalf of the Ivorian government, I congratulate and encourage the African Development Bank in its firm commitment to play an active role, side by side with other multilateral and bilateral donors, in strengthening human and institutional capacities in Africa. The Ivorian government, for its part, has undertaken a significant redynamization of the private sector and a strengthening of development program and project management capacities, in a framework that enhances sustainable development, while ensuring the strengthening of national capacities.
To achieve this, the Ministry of Development Planning and Programming, for which I am responsible, pays particular attention to monitoring and evaluation activities, in view of the light thrown by clear and precise analyses on the conduct of our activities. The recent creation of this ministerial department shows the government's recognition of the importance of the planning, programming, budgeting, monitoring, and evaluation cycle. This cycle is the major focus of the ministry.

Côte d'Ivoire has significant experience in matters concerning projects and implementation. Our guidelines are:

* Implementation decentralization. We seek to give maximum responsibility to project implementation structures by offering clear and precise objectives.
* On-the-ground, close monitoring, with a specific and independent structure, by le Bureau National d'Etudes Techniques et de Developpement (National Bureau for Technical Studies and Development, BNETD).

Our first concern is to systematically implement a well-designed set of precise monitoring indicators, determine easily identifiable performance criteria, and recommend the timely production of analytical evaluation reports.

With regard to your seminar, I note with satisfaction that one of the expected results concerns the establishment of information and exchange networks on African methods and experience in matters of program monitoring and evaluation. For our part, we are glad to compare our experiences with those of the African countries represented here, as well as with those accumulated by all our partners in development. This is a particularly laudable initiative because for most of the topics we are working on, there are African solutions that have yielded good results. Too often, the very existence of these solutions is ignored. To know how to compare the best strategies, carry out benchmarking, and transfer know-how is also a form of development.
The list of participating countries (South Africa, Burkina Faso, Côte d'Ivoire, Ethiopia, Ghana, Guinea, Malawi, Morocco, Mozambique, Uganda, Tanzania, and Zimbabwe) is such that I am convinced of the high-level nature of the discussions you are going to have. I am the most unlucky person here because, unfortunately, I am not going to participate.

Ladies and Gentlemen: This second regional seminar on capacity development, organized eight years after the one held in 1990, should help to establish a precise balance sheet of the progress made, to identify what is left to be done, and to define the steps to be taken to get there. There is no doubt that the results of our work will be enriching and will help to improve the preparation of development plans and programs. Indeed, it is from the lessons learned and the recommendations drawn from your program performance evaluations that governments can redirect, modify, and redefine their actions to attain their objectives.

Meetings such as yours should lead to concrete action plans and recommendations, directly usable by decisionmakers. I hope that the three objectives you have set for yourselves (designing preliminary action plans, establishing contact networks, and adopting a coherent approach by development assistance agencies) will be achieved. I will express only one wish: that the results of your seminar will themselves undergo rigorous evaluation and monitoring.

Wishing you fruitful deliberations on Ivorian soil, I declare open the Regional Seminar and Workshop on Monitoring and Evaluation Capacity Development in Africa. Thank you.

3 The Goals of the Workshop
Gabriel M. B. Kariisa

It gives me great pleasure to welcome you all to the African Development Bank and to this Regional Seminar and Workshop on Monitoring and Evaluation in Africa.
On behalf of the African Development Bank, and on my own behalf, I extend a warm and hearty welcome to all the participants. Our regional member countries are represented by Burkina Faso, Côte d'Ivoire, Ethiopia, Ghana, Guinea, Malawi, Morocco, Mozambique, South Africa, Tanzania, Uganda, and Zimbabwe. I am particularly heartened by the high level of participation from member governments. I welcome all the distinguished participants from donor institutions and governments, many of whom have been instrumental in setting the background for this seminar and who will be making presentations. I welcome our own Bank staff from several departments and units of the Bank. I thank you all for making special efforts to come here and participate in the seminar. I am confident that with this strong participation, the deliberations of the seminar and the workshop will be meaningful and fruitful.

The number of countries and donor agencies participating and the number of delegates present are a clear indication of the importance that member governments and lending agencies now attach to monitoring and evaluation (M&E). I referred to the high level of participation because an evaluation culture and practice in government departments is not easily achieved and requires patient and sustained effort. The high level of interest is an extremely promising development, indicative of strong governmental and donor support and commitment to the development of M&E capability and capacity in developing member countries.

Supporting the task of the socioeconomic development of Africa is a massive and challenging undertaking. Resource needs are large, but resources are becoming scarcer. The gap between the availability of resources and the demand for them is unlikely to narrow in the near future, so it is important to use available resources in the most efficient and cost-effective manner.
The Bank (and indeed the entire donor community, as well as the member governments) is increasingly being called upon to account for the resources put at its disposal. M&E is thus being viewed around the world as an indispensable and strategic tool for development management.

Experience of development management in most countries, and most certainly in Africa, suggests that there are growing gaps between what is planned and what is implemented and achieved. Too often there is a large gap between design goals and objectives and actual outcomes and impact. There is also uncertainty about the sustainability of the benefits achieved. Monitoring and evaluation is a tool by which development managers can effectively uncover the causes of underperformance. M&E can also provide lessons for the future. It can help improve overall efficiency in the design, implementation, and operation of development projects and programs.

In the project cycle, evaluation based on earlier monitoring occupies an important place. It matters little that it comes as the last phase of the completed project: what is important is that its feedback is a critical input at the very first stage of the next project. M&E is thus a very important phase of the project cycle. Properly conducted, M&E can make a significant contribution to improving project quality, encourage accountability, allow transparency in public sector investments, and strengthen all aspects of development management in a country. It can be an important tool for economic and other public sector reform. Even in the private sector, total quality management (TQM, as it is commonly called) and the strategic planning concepts in use are based on, and require, an established capability for M&E of performance in terms of results, feedback, and refinement of objectives and strategies. And yet M&E is still scarcely used in most developing member countries.
It is considered by most developing countries (and even some of the developed nations) as a luxury, or a donor-driven fetish or appendage that has to be tolerated. Our success will largely depend on the extent to which we are able to erase that mindset in the member countries. Everything possible needs to be done to enhance the development impact of domestic and external resources expended for economic development and to address the social needs of the populations of member countries. There is considerable evidence that mistakes in concept, design, and implementation arrangements continue to be repeated at great cost to the economies. These could have been avoided if only the findings and lessons from previous evaluations had been available and used. Where evaluation results were used, perceptible gains in development impact were noted. Seminars such as this are a means to sensitize the development partners, both donors and recipients, to the importance of M&E.

The first Regional Seminar on Performance Evaluation in Africa was organized jointly by the Bank and the Organization for Economic Cooperation and Development (OECD) here in Abidjan in 1990. Similar seminars were later held in other regions. This seminar follows up on the first and is intended to take stock of the progress made in Africa since then. We also hope to benefit from developments elsewhere in M&E since the last seminar.

The main objective of the seminar is to raise the awareness of decisionmakers in regional member countries (RMCs) of the importance and benefits of evaluation. It will also help broaden our understanding of the range of institutional capacities, strategies, models, systems, resources, and paths. A dialogue among donors, policymakers, planners, and implementers of governments in the region, together with the sharing of experiences, could lead to better understanding and productive directions for our future steps.
Few member countries have institutionalized evaluation systems, and evaluations have largely been confined to donor-assisted projects and programs. However, a much larger proportion of the development effort is the countries' own. This seminar provides an opportunity to discuss the development of countries' own systems and to design evaluation capacity building (ECB) programs, or to strengthen such capacity where it exists. In this regard, the seminar should be able to support the efforts of RMCs to improve their capabilities to review the effectiveness of their development interventions. The seminar aims to gather the experiences of M&E activities in selected RMCs and to consider options for the future. It should also assist donor agencies in developing plans to assist RMCs in this regard. It must serve as a first step toward the creation of regional networking that could expand efforts to establish an evaluation culture in Africa.

I mentioned earlier that feedback is a critical input at the very first stage of the next project. I will dwell a little more on this. The usefulness and effectiveness of M&E systems depend almost entirely on the efficacy and effectiveness of the means and methods of feeding back lessons to the planning process in the design of new programs and projects. Growing competition for funds and the increasing complexity of development constraints are creating a demand to learn more from the past, so as to do better in the future. Development of evaluation capability would thus need to be accompanied by development of a credible and effective feedback system to provide such linkages to the past. The design of the feedback arrangement would have to provide efficient storage and retrieval, as well as categorizing and collating systems, so that findings and lessons could be easily accessed by region, country, sector, subsector, and project-cycle step.
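Purely as an illustration of the categorizing-and-collating system just described, a lessons store that can be queried by region, country, sector, subsector, and project-cycle step might be sketched as follows. The names (`Lesson`, `LessonStore`) and the sample entries are invented for this sketch and are not part of any system discussed at the seminar.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lesson:
    """One evaluation finding, tagged by the retrieval facets named in the text."""
    text: str
    region: str
    country: str
    sector: str
    subsector: str
    cycle_step: str  # e.g. "design", "appraisal", "implementation", "evaluation"

class LessonStore:
    """Minimal in-memory store: add lessons, then retrieve by any facet combination."""

    def __init__(self):
        self._lessons = []

    def add(self, lesson: Lesson) -> None:
        self._lessons.append(lesson)

    def find(self, **facets: str) -> list:
        # Keep only lessons whose tags match every facet the caller supplied,
        # e.g. find(country="Ghana", cycle_step="design").
        return [l for l in self._lessons
                if all(getattr(l, k) == v for k, v in facets.items())]

store = LessonStore()
store.add(Lesson("Involve line ministries early in indicator design.",
                 region="West Africa", country="Ghana", sector="health",
                 subsector="primary care", cycle_step="design"))
store.add(Lesson("Budget for baseline data collection before implementation.",
                 region="West Africa", country="Ghana", sector="agriculture",
                 subsector="irrigation", cycle_step="appraisal"))

# A planner designing a new Ghanaian project can pull the relevant lessons:
matches = store.find(country="Ghana", cycle_step="design")
```

A real feedback system would of course need persistent storage and controlled vocabularies for the facet values, but the design point stands: lessons must be tagged at entry so they can be retrieved at the project-cycle step where they are needed.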
Performance evaluation capacity and feedback systems need to be built up within the nodal agencies: the finance or planning ministries, the line ministries, and the larger implementing agencies in the public sector. Great care will have to be exercised to ensure that ECB programs and action plans are visionary but practical. ECB is a long-term process requiring sustained commitment, effort, and resources. The continued commitment of government, and its practical manifestation in the provision of support and resources, will be crucial to any programs we are able to develop at this seminar. I am confident that our distinguished participants will be able to spread the message and sustain the necessary commitment in their respective capitals.

We donors, too, have a responsibility to shoulder. Our commitment and support to the governments, particularly during the early stages of institution building, will considerably influence the pace and sustainability of ECB efforts. We have to explore various modalities for supporting ECB programs in Africa. The presence today of a galaxy of both multilateral and bilateral donor agencies, and their support in this endeavor, is cause for great hope and comfort.

Some of the seminar discussions have been organized in small, parallel sessions. We hope that this will provide for greater interaction among participants and will lead to more purposeful discussions and conclusions. This will also facilitate the preparation of a practical action plan. The plan is expected to set out a comprehensive roadmap of activities, both short- and long-term, primarily for RMCs but also for donors. Implementation and success of the plan will depend greatly on the commitment and resources provided by the member countries themselves and on their partnership with the donors. Thank you all for joining this seminar. I wish you very fruitful and productive deliberations.
4. Introductory Statement
Niels Dabelstein

Mr. Chairman, Mr. Vice-President, Distinguished Participants: My very first visit to Abidjan was to attend the first Regional Conference on Evaluation Capacity Building, held here in 1990, so it is a special pleasure to open this seminar, which, like the first, is cosponsored by the DAC Working Party on Aid Evaluation (WPAE) and the ADB, with able support from the World Bank and UNDP.

The purpose of the first conference was to raise consciousness about evaluation as a management tool and as an accountability tool. I remember going home from that occasion a little disappointed: the key conclusion was that although evaluation institutions did exist in several African countries, most had little impact on policy and management decisions. The main reason identified was that there was little political and managerial demand for independent and transparent evaluation. Credible and useful evaluation is a function of good governance; that is, demand for effective public sector management and accountability leads to the existence of evaluation institutions and professional capacity.

The focus of recent years on public sector reform, on public expenditure management, on democratic reforms, and on development effectiveness has, I hope, increased the demand for evaluation as a necessary and useful element of good governance. The emphasis at the first conference was on illustrating the usefulness of evaluation by examples from European countries. At this seminar we are taking a different, and perhaps more fruitful, approach: sharing experiences among African countries and between the DAC members and our African colleagues. It is a positive sign that more Africans are attending this seminar than non-Africans.
Although support for building evaluation capacity in developing countries has been on the agenda for several years, and the WPAE has promoted the concept actively through several seminars and through members' direct assistance, I am sure we can do more. My colleagues in the donor community and I stand ready to support evaluation capacity in any way we can, but I know that unless there is a genuine wish to improve public sector management and transparency, any evaluation capacity developed will not be used. Thank you, Mr. Chairman.

5. Evaluation Capacity Development: Lessons Learned by the UNDP
Arild O. Hauge

Evaluation is a good public management practice

A key feature of successful organizations, whether public or private, is their ability to learn from experience and to react to changing conditions, whether in the social, political, or institutional environment or in the marketplace. In approaching evaluation capacity development (ECD), one must recognize that the importance of the evaluation function is not unique to the development cooperation sector. Evaluation is not a passing fad or a short-lived obsession of the development assistance agencies.

Evaluation capacity is increasingly being acknowledged as a mechanism for providing accountability for public resource use and, simultaneously, as an instrument of organizational learning. Evaluation is a management function, a quality assurance mechanism, a learning process, and a possible marketing tool. Evaluation is necessary, although not necessarily sufficient, in all these respects. Introducing evaluation as a distinct function is one important option for a mechanism that can serve as a surrogate for the market mechanism and bottom-line profitability that signal success or failure in the private sector.
Certainly, anyone wishing to search for common threads of contemporary theory or "best practice" in public sector management will be struck by the increasing prominence of the evaluation function, in one shape or another. Our basic premise is that good evaluation can help people, organizations, or nations to better reach their goals. Our proposition is, accordingly, that a capacity for evaluation is also a necessary component of developing countries' overall ability to manage their process of development. M&E supports accountability, the assessment of the quality of decisionmaking, and the rationality of resource allocation, and it can be an indispensable tool in achieving the development goals of a nation.

Evaluation is one possible component of the complex set of mechanisms, processes, relationships, and institutions through which citizens and groups articulate their interests, exercise their rights and obligations, and mediate their differences. Evaluation can be one cog in a nation's basic system of checks and balances. We thus perceive ECD as a dimension of sound governance: the democratic exercise of political, economic, and administrative authority in the management of a nation's affairs.

The purpose of evaluation is to improve decisionmaking. The outcome of evaluation is, first, a better understanding of what works, what doesn't, and why. But conducting evaluations and producing reports is not an end in its own right. It is when the findings of an evaluation are understood, discussed, and acted upon that it fulfills its function: when decisions are made about the future of the subject policy or program itself; when policymakers and managers understand its implications for other policies or programs; and when changes are actually made in the process of resource prioritization and allocation.
If evaluation is about enhancing the quality of public investment, then a need for the function exists at different levels of government: the level of overall national resource allocation and policy formulation; the level of individual sectors or technical ministries; the level of regions, municipalities, or local authorities; and the level of individual public organizations or projects.

At the UNDP we tend to bracket monitoring with evaluation (M&E). It is our view that if monitoring cannot help in taking appropriate decisions, then evaluation by itself may come too late. While separate as concepts, both are designed to further accountability, learning, and decisionmaking, and they are thus fundamentally linked by purpose. Good monitoring is an essential prerequisite for good evaluation, and monitoring has a built-in aspect of self-evaluation. Evaluation is not a measurement exercise: basic information on results from an activity should come from monitoring, and the role of evaluation should be to judge whether those results have had the intended effects, whether they have been achieved efficiently, and whether they are sustainable.

Evaluation has traditionally been seen as a project-level, post-implementation activity. The emerging perception is of evaluation as an ongoing management and planning activity rather than a one-off, post-implementation tag-on. With the global shift in emphasis of the public sector from project micromanagement to broader policy direction, we believe evaluation to have the greatest payoff when applied to program- and policy-level activities rather than to isolated, individual projects.

UNDP experience with ECD

Evaluation capacity development is a necessary component in UNDP's program of assistance to developing countries. UNDP's work in ECD has been mandated by the General Assembly and has been included among the key areas of work for our Evaluation Office since its establishment in 1983.
At the last General Assembly, there were renewed calls for the UN to intensify its ECD efforts. UNDP's ECD efforts are, nevertheless, relatively modest. The ECD activities and experience of UNDP fall into four categories.

Country-level projects: M&E is a component in many of the several hundred ongoing UNDP country-level projects that support program countries' national authorities in strengthening public sector administration and developing capacities for external assistance management. In the last decade, we have had approximately 100 technical assistance projects aimed at capacity building that are specifically related to M&E. The primary purpose of most project-level ECD is to support the implementation of particular sectoral or thematic programs, usually funded by UNDP and other donors, rather than the development of generic, national capacities for evaluation. Assistance projects aimed at ECD generally appear to use traditional models of cooperation, such as the provision of short- or long-term advisers, sponsorship of study tours, and installation of computers and other equipment.

Monograph series on national M&E: Over the 1989-96 period, the Evaluation Office of UNDP conducted a series of regional and country-level studies on national M&E. A total of 18 countries were studied as part of this exercise, of which 6 were in Africa: Tanzania, Uganda, Morocco, Guinea, Côte d'Ivoire, and Zimbabwe. The monograph series was significant in that the studies were among the first in which M&E was examined from the angle of national governance and development management, rather than from the point of view of donor needs. In addressing the wider domestic context for ECD, the monograph series contributed substantially to understanding the futility of treating ECD as an issue isolated from the overall policy and institutional environment of the developing country.
Subregional workshops and training programs: UNDP has conducted a series of subregional workshops on M&E for UNDP staff and counterpart governments. The first was held in 1995 in Addis Ababa, Ethiopia, with others held in Kuala Lumpur, Malaysia; Buenos Aires, Argentina; Prague, Czech Republic; and Tallinn, Estonia. A key element of these programs has been a review of national experiences with M&E. The workshops also provided direct input into the revision of our own policies and guidance for the conduct of evaluations. A total of 40 countries have participated in these workshops, through delegations of senior government officials involved in domestic and external resource planning and management, from ministries such as Finance, Foreign Affairs, Social Welfare, Economic Planning and Development, Transport, Health, and Education. The Evaluation Office has also provided in-house training to government officials from a number of countries, including the Republic of South Africa, Morocco, Guinea, and Madagascar.

Multilateral cooperation and exchange of experience: UNDP has been an active participant in the discussions and exchanges of experience undertaken by the OECD/DAC Working Party on Evaluation. Since its first meeting on ECD here in Abidjan, ECD has been one of the recurring topics of this DAC group. UNDP is the convenor of the UN Inter-Agency Working Group on Evaluation, where, again, ECD has featured as a prominent topic. On a bilateral level, the Evaluation Office of UNDP and the Operations Evaluation Department of the World Bank have established a Framework for Cooperation in Evaluation. The agreement focuses on collaboration in supporting the evaluation capacity development of program country authorities.

Like the development cooperation community at large, we have found a coherent and focused approach to ECD elusive. We know more about what does not work than about what works.
We certainly have no "formula" or "recipe" for successful ECD, but we have learned a number of important lessons, which we would like to share.

ECD lessons learned by UNDP

Although some countries in Africa have relatively robust monitoring functions, evaluation capacities remain, in most cases, at an incipient stage. All countries have some kind of system for monitoring domestic recurrent expenditures, and most also have some system for monitoring domestic capital expenditures. When it comes to evaluation, however, experiences are relatively limited and usually derived from exercises associated with donor organizations. From this experience there is often a perception of evaluations as a nuisance (and not just because of the time involved). There is a feeling that the donors are coming to "check" on their counterparts' mistakes; that local institutions are relegated to being providers of information; that donors have the final say in what conclusions are reached about ostensibly cooperative activities; and that little regard is given to what the donors themselves could have done differently.

Few developing countries have a comprehensive system for reviewing all development resources, an evaluation function that is central to public decisionmaking. For evaluation of domestically funded activities, the quality of information and access to it are often poor, mechanisms for feedback into the decisionmaking process are weak, and a culture of accountability is not firmly in place. Most evaluations independently undertaken by developing countries are project-oriented, with very few countries having undertaken broader program-, thematic-, sector-, or policy-level evaluations.
ECD needs a strategic or systems approach

We define capacity as the ability of individuals, organizations, institutions, and societies to effectively, efficiently, and sustainably perform functions, solve problems, and set and achieve objectives. Successful ECD does need skills, staff, and logistical resources, but this is not enough. The existence of physical facilities or the development of technical skills does not lead to capacities if addressed in isolation from the managerial processes of an organization. Moreover, individual organizations do not function in a vacuum: they operate within a wider set of values and systems, and depend on a complex and organic policy and institutional environment. Creating technical capacities for evaluation makes little sense if undertaken in isolation from the essential processes of national decisionmaking.

Capacity is the power of something (a system, an organization, or a person), individually or collectively, to perform or to produce. It is not a passive state reached at a certain time, nor one that cannot be improved. It is a continuing process of learning and change management. Capacities exist at different levels, and at each of these levels there can be several dimensions (see box 5.1).

Box 5.1: Levels and dimensions of capacity development

Policy environment
* Generic capacity development dimensions: governance values; government/civil society partnership; popular participation in policymaking; public service culture and incentives.
* Issues of particular relevance to ECD: standards of transparency and accountability; linkage to planning, budgeting, and resource management processes and procedures; role of central vs. line/technical agencies; role of national vs. regional/local administration.

Institutional infrastructure
* Generic capacity development dimensions: undisputed legislative foundations; clarity of functional responsibilities; coherence of formal process linkages; systems and practices for interaction/liaison between institutions; judicial fairness and conflict resolution mechanisms.
* Issues of particular relevance to ECD: scope of the evaluation mandate (recurrent vs. capital expenditure, external loans, grants, technical assistance); definition of evaluation criteria; responsibilities in setting the evaluation "agenda," choosing subjects, and selecting evaluators; principles of independence vs. participation; relationship management practices (authority to use, and responsibilities for follow-up to, evaluation); linkage to audit.

Unit, individual organization
* Generic capacity development dimensions: allocation of budget and staffing; management culture, systems, and practices; skills pool and staff development; establishment and development of procedures and methodologies; availability of physical, administrative, and logistical support facilities.
* Issues of particular relevance to ECD: establishment of the evaluation approach (qualitative vs. quantitative techniques, composition of evaluation teams); substantive/sectoral vs. functional/methodological organization of work; formulation of evaluation plans (timing/periodicity, reporting); availability of skills training (process facilitation, development analysis, accounting, statistics, partnership building); access to international evaluation resources, networks, and websites; facilities for data management and reporting.

People
* Generic capacity development dimensions: job descriptions and definition of individual responsibilities; access to professional development programs.
* Issues of particular relevance to ECD: recognition of evaluation as a career track; membership in professional evaluation societies.

Capacity development is, then, a concept broader than organizational development, because it requires an emphasis on the overall systems, environment, or context within which individuals, organizations, and societies operate and interact (and not simply the internal workings of a single organization).
Parachuting evaluation trainers into a country, or sending any number of trainees to overseas institutes, has little value if no one wants or asks for evaluations, or if it is unclear how evaluation fits into the decisionmaking process. Simply padding a capacity development program with donor money may give a short-term appearance of something happening, but it will not produce increased, sustainable capacities.

Of particular relevance to ECD are the public policymaking and resource allocation processes and the prevailing public administration and accountability mechanisms. A successful evaluation function requires clarity in its institutional and policy role; a legal mandate and independence; formal linkages between evaluation and resource allocation; and a basis for the use of evaluation findings in the policy formulation process. There is no perfect advice as to what choices African countries should make in addressing these issues. Evaluation is an exercise aimed at injecting sound judgment into the decisionmaking process. The most general conclusion is that ECD approaches must be tailored to the individual and unique circumstances of each country.

Dimension 1. Successful ECD requires domestic demand and political support for the function

A fundamental challenge of ECD is the frequent lack of genuine demand for honest evaluation and feedback on the effectiveness of public actions. ECD needs to be founded on a national perception of the priority attached to the function. The existence of a demand for accountability, a genuine priority given to the validation of experiences gained, and a willingness to learn from lessons of the past are prerequisites for the relevance and effectiveness of any evaluation function. When there is no understanding of what evaluation represents, or no truthful demand for actually knowing whether policies or programs work, there is little point in providing the service.
The use of evaluation findings is more important than the identification of lessons for their own sake. Without sufficient political and cultural support for the function, the conduct of evaluation may be little more than a perfunctory exercise primarily aimed at institutional self-justification. The primary issue in addressing demand for evaluation is the question of what purpose the evaluation function as a whole is to serve. There are a number of possibilities:

* Helping individual program managers to manage better and to improve stakeholder communication
* Holding managers accountable for results produced or not produced
* Providing guidance to policymakers about policy options and priorities
* Providing signals to planners in determining the relative merit of programs
* Demonstrating achievements for external resource mobilization.

This list is not exhaustive, nor are the options mutually exclusive. The point is that successful ECD needs a genuine foundation of purpose and clients. Demand is not a construct of conceptual logic but a result of practical need: the existence of people who actually want to make use of an evaluation function for some rational purpose. The nature of demand essentially determines the scope of an evaluation function, particularly whether its capacities are to be applied at the level of policies, programs, or projects.

A positive source of demand is more useful than a negative source. Evaluation demand can arise from a sense of crisis, such as when a fiscal crunch necessitates identification of areas for budget or staffing reductions. Another negative scenario has, in certain places, led to a demand for evaluation: when a new government wishes to purge itself of association with the misdeeds of a previous regime.
However, our experience is that functions built in response to needs that are essentially negative or transient rarely prove to be effective, and they are certainly not sustainable. Numerous case studies demonstrate that the success of capacity development efforts has to be founded on forceful political support, which itself can be thought of as a variety of demand. Morocco is one example of a developing country where the highest level of the executive has become personally involved in demanding and promoting the function. It has often been said that capacity development needs champions who personally take the initiative through advocacy, facilitation, coaching, or other means. However, although individuals clearly matter, it is the totality of support that matters more: the overall momentum and consistency of demand for evaluation findings from high-level policymakers, middle-level technocrats, or popular and grassroots movements. Of course, having supporters or even champions at the highest level of influence clearly helps. The optimal demand situation is one where the practice of evaluation follows from the incentives embedded in public service systems, where rewards and sanctions are guided by the production of results, and where managers collectively perceive self-interest in adopting the tools of continuous assessment and learning.

Dimension 2. Inclusiveness, openness, and participation

UNDP's mandate and programming objective is sustainable human development (SHD). The people-centeredness of the SHD concept entails a philosophical belief in participation, as a means but also as an end in its own right. Accordingly, UNDP also believes in an evaluation model that puts emphasis on participation and inclusiveness. However, in addition to our perception of participation as a substantive developmental priority, we believe there is evidence that insisting on inclusiveness and openness is an imperative functional approach to ECD.
Poor people should be entitled to a voice in determining whether a poverty program is a success. It is better to have an imperfect answer from someone who has a legitimate claim to concern than a perfect answer from an expert whose mind is elsewhere. Among program stakeholders, special concern is needed for the representation of the ultimate intended beneficiaries.

Some believe that the anticipation, planning, and conduct of evaluation is a process more important than the actual facts, conclusions, or recommendations unearthed. Certainly the process itself can help improve communication among program stakeholders, and thus can support building consensus on desirable outcomes and program strategies. Evaluation can help build partnerships within government, and between government and civil society.

Similarly, the process of evaluation planning and conduct needs transparency. Respect for the function depends on being able to understand what purpose evaluation is to serve, and what criteria were applied in determining the subjects of evaluation, in selecting evaluators, and in choosing the methodologies applied. Trust in the function is undermined when evaluation is conducted "inside a black box."

Dimension 3. Clear roles and linkages, building on the unique national institutional environment

The effectiveness of evaluation depends on not only political but also legislative backing for the national institutions involved. A major factor in establishing a system of evaluation responsibilities is the purpose and scope of evaluation activity.

Some questions that need to be addressed in codifying evaluation responsibilities include:

* Which kinds of decisions are to be guided by evaluation?
* Who decides what is to be evaluated and who the evaluators should be?
* Who commissions or coordinates evaluations, and on what basis?
* Who sets methodological guidelines or provides evaluation quality control?
* Where does funding for evaluation come from?
* What are the responsibilities and data access privileges of evaluators and clients?
* Who is entitled to access to evaluation findings, and what principles should govern that access?

Clearly, the immediate tasks involved in ECD cannot be addressed in isolation from the broader organizational and institutional context of the function. Development of evaluation depends on an appreciation of how the function fits into the processes of national decisionmaking and the prevailing or planned system of institutional roles and responsibilities. Of particular relevance to evaluation is clarity with respect to the following:

* The role of evaluation in executive decisionmaking
* The role of evaluation in parliamentary or legislative deliberations
* The role in national, sectoral, or regional/local planning
* The linkage of evaluation with the budget system: appraisal, review, resource management, and accounting
* The delineation of the responsibilities of evaluation vs. audit.

Where should the overall responsibility for evaluation reside? Options include the Office of the Prime Minister or President; Parliament or a parliamentary committee; independent commissions; audit; the Ministry of Finance, Planning, and/or Economic Development; technical/line ministries; regional administration; and individual agencies and departments. Irrespective of the choice of location for an evaluation function, its role and responsibilities in interacting with other institutions remain critical. If change processes are ongoing in overall civil service development or public sector reform, that is the appropriate context for addressing ECD.

Of special concern to ECD is the need to build on existing systems and institutional realities and to use and strengthen existing capacities, rather than starting from scratch.
Letting a capacity evolve by expanding the mandate or strengthening the capacity of an institution that is already known, and that may already possess some relevant skills and experience, is more likely to yield success than introducing new entities that have no identity in the established institutional order and decisionmaking process.

Dimension 4. ECD scope: evaluation should not be exclusively focused on aid evaluation

The most powerful argument for evaluation capacity development is to make domestic and externally funded development investment more effective. Aid is only a relatively small proportion of development resources in all but a few developing countries. Evaluation is a function that can help improve the management of all public resources, whether raised domestically, by external borrowing, or through grant-based technical assistance. It is critical that efforts to strengthen national evaluation capacities be founded on existing national mechanisms of domestic resource allocation and policy formulation, and not merely on the individual or collective organizational needs of external cooperation partners. The establishment of institutional capacities dedicated to donor M&E requirements entails the operation of a parallel civil service system and undermines the sustainability of local capacities. The integration of capacities for evaluating domestic and external resources is ultimately related to the ownership and sustainability of the development management process.

The most important reason that evaluation capacity, as it applies to aid evaluation, may benefit developing countries is that it can help reconcile external and domestic development priorities by giving a true picture of the relevance, results, and ultimate sustainability of development cooperation efforts.
A local capacity for evaluation can therefore help improve the effectiveness of external assistance. Activities closely monitored and evaluated by those who are involved yield greater results.

A second reason that evaluation capacities may benefit developing countries is that they may help make more aid available. If local evaluation capacities can contribute to better demonstrating development cooperation achievements, this may also help attract more resources. There is clear pressure on donors to channel their assistance to development activities that demonstrate tangible results.

Finally, when developing countries deal with donors, their immediate problem is that donors all seem to have different policies and procedures for evaluation; relating to these takes a great deal of time and resources. In contrast, if recipient countries were to take evaluation initiatives, offer good evaluators, and use evaluation in their decisionmaking, UNDP might impose fewer of its own evaluation requirements. If local capacities were generally credible, donors would, at the margin, perhaps relax some of their requirements. The task of administering aid could, from the view of the recipient, be reduced.

Dimension 5. Avoidance of methodological dogma

An evaluation system should not be conceived with a predetermined methodological approach. By approaching ECD with a methodological recipe, one may derail the potential emergence of alternative approaches that could ultimately prove more relevant and realistic.

Within professional evaluation circles there is a never-ending debate about methodology. There is no unequivocal conclusion from these debates about the importance of independent vs. internal evaluation; the relative weighting of accountability vs. learning as evaluation objectives; the merits of quantitative vs. qualitative data collection or quasi-experimental vs. case study research design; or the definition of specific evaluation criteria. All methodologies have advocates and opponents; no single methodology can give answers that are both qualitatively rich and quantitatively precise. Although the best research result may be found in a triangulation of methodologies, depth of research must be weighed against practical viability. Given local circumstances, however, there may be generic approaches that can get a country on the right track.

A major principle is that any viable methodological approach needs to represent a useful balance between what is ideal and what is possible. Cost-effectiveness is a particular concern. In terms of technique, the establishment of terminological consistency, particularly the clarity of terms expressing essential concepts, is often more important than the refinement of more detailed and individual terms.

Elements of future UNDP support for ECD

Decisions about the existence and shape of UNDP support to ECD in individual countries are necessarily a matter for resident representatives and their government counterparts. ECD efforts are funded from resources allocated within the respective Country Cooperation Framework, and they need to dovetail with the other activities of the overall program of assistance to the country. Beyond the technical advice and support that we can offer our country offices, the assistance that can be provided by our headquarters Evaluation Office is limited. However, we do wish to augment the work done by our country offices.

Advocating evaluation as a component of democratic governance and public sector reform in country cooperation discussions

Where demand for evaluation is elusive, ECD activities should focus on advocacy that demonstrates the potential of evaluation to improve the relevance and effectiveness of public policy and programs. Proceeding with intensive technical efforts in such circumstances is a waste of time.
Stimulating demand for evaluation is ultimately a necessary component of broader advocacy and assistance aimed at governance, the adoption of participatory approaches to development, the promotion of decisionmakers' accountability for public resources, and results-orientation in the management of public affairs. This entails integrating ECD with UNDP's wider agenda of governance and development management, and mainstreaming ECD into our country program negotiations. Ensuring the adoption of evaluation as part of our agenda, however, is a challenge for internal advocacy.

Slow emergence of demand and ownership is likely to remain the binding constraint in most countries. Program countries that have received support for ECD in recent years indicate that such assistance has helped create awareness of the value of evaluation among policy managers. Where demand is the main problem, interventions should be targeted at that demand. Advocacy must then precede assistance aimed at more specific and technical capacities. The most general strategy for helping to create a demand for evaluation is to demonstrate the practical utility of evaluation to decisionmakers: how evaluations represent a possibility for gauging managerial performance; how evaluation findings may contain lessons that are valid for other activities; and how evaluations may form an input to resource allocation and policy formulation.

Studies on policy and institutional dynamics of evaluation

The starting point for support to the ECD efforts of individual program countries is a careful assessment of the demand for and commitment to the evaluation function. Where demand exists and enjoys political support, the provision of assistance for the development of technical skills needs to be accompanied by efforts aimed at other aspects of the broader policy and institutional environment.
In addition to adhering to the imperative of conducting thorough institutional analysis before extending technical assistance, we would like to resume our earlier practice of surveying the M&E experiences of individual developing countries. Although our earlier M&E monographs may not have fundamentally changed the nature of ECD efforts in the countries surveyed, we believe the effort helped shed light on some critical dimensions of the issues. It is perhaps ironic that the monograph series may, in the short run, have contributed more to general perceptions of ECD than to the surveyed countries themselves.

Facilitating access to international professional networks

Among the main conclusions of the subregional workshops is the need for networking among the evaluation professionals of developing countries. Evaluation capacities need to evolve from exchanges of experience among the countries that have (or wish to establish) such capacities, not merely from interaction with the donor community.

The Evaluation Office has established an Internet website for evaluation, which includes links to a wide range of evaluation resources. We have recently updated the Directory of Central Evaluation Authorities, compiled on behalf of the OECD/DAC, which includes data on 134 program countries. The Directory includes the names, addresses, and descriptions of the functions and responsibilities of central government institutions and officials involved in the monitoring and examination of the efficiency and effectiveness of governmental and donor-funded development activities.

There is clearly a need to establish professional organizations of regional program country evaluation professionals. The professional evaluation associations that do exist are based on the expertise and membership of professionals almost exclusively aligned with the donor community.
UNDP would like to support any initiative aimed at the formation of regional or subregional professional organizations or networks. UNDP, however, wishes to await initiatives from the countries themselves.

Use of program country evaluators

Some contribution to program country capacities is made through the participation of nationals in evaluation exercises. The practice of using external or independent experts, frequently from developed countries, as evaluators may counteract the learning potential of the function and may hamper capacity development. The use of foreign experts as evaluators may lend a greater element of independence and objectivity to the findings and recommendations of evaluation. In many cases, however, it has resulted in the function being associated with serving the accountability requirements of donors. The need for objectivity must be balanced against the learning potential and the local or regional skills retention that can flow from a greater use of local expertise. UNDP is therefore seeking to promote the use of program country experts as evaluators, in both project- and program-level evaluation exercises, and intends to expand the pool of such experts in its central roster of evaluation experts.

Donor coordination and cooperation

Finally, ECD needs to be pursued in conjunction with procedural harmonization within the UN system and the development cooperation community at large. We support the DAC recommendations that donors should adopt a collaborative approach to achieve consensus on concepts and methodologies, and should consider channeling ECD resources through a unified structure to avoid duplication and achieve economies of scale.
6. Evaluation Capacity Development: Issues and Challenges

Robert Picciotto

This workshop on evaluation capacity development (ECD) symbolizes the new development momentum of Africa, the strengthened focus on the results of development cooperation, and the emphasis on partnership within the development system.

Partnerships are not ends in themselves. They are but means to poverty reduction. And in this context, the construction of evaluation capacity in developing countries is a self-evident priority. According to Michael Scriven, "evaluation is a key tool in the service of justice . . . it directs effort where it is most needed."

Why is the profile of ECD rising? Simply because markets, governments, and the public at large are demanding more effectiveness and responsiveness from the development system. Evaluation capacity, defined as the ability of public institutions to manage information, assess program performance, and respond flexibly to new demands, is becoming a vital instrument of development.

At the outset, let me note a basic paradox. Considerable effort by scores of development assistance agencies over several decades has produced little in the way of evaluation capacity. The central objective of ECD, the creation of resilient evaluation organizations in developing countries capable of playing a significant role in resource allocation and policy formation, has proven elusive.

What lies behind this lackluster performance? Three formidable sets of obstacles come to mind. The first is on the demand side. Until recently, few developing countries evinced strong commitment to the evaluation function. To be sure, several developing countries have created evaluation units attached to planning, finance, or prime ministers' offices. But an enabling evaluation culture, involving the consistent use of feedback in formulating policies and managing public expenditures, has only been incipient.
Fear of political fallout and of public criticism have been inhibiting factors. A shortage of trained staff has been a constraint. But the major obstacle on the demand side has been the weak performance orientation of the public sector. In turn, the limited focus on results is linked to ineffective governance mechanisms. Accountability, participation, transparency, and the rule of law are still sorely lacking in most public sectors of the developing world. Uncertainties as to the relative roles of auditors general, planning agencies, finance ministries, the legislative and judicial branches, and the prime ministers' and presidents' offices add to the confusion. Within constrained resources, priority has been given to the improvement of budget, auditing, and accounting systems. So demand constraints help explain why evaluation has been the Cinderella of public sector reform.

The second set of constraints lies on the supply side. Bluntly put, development assistance agencies have failed to get their act together. ECD initiatives have been fragmented, uncoordinated, and supply-driven. Most have been geared to individual projects, without the benefit of a coherent countrywide public sector reform strategy. The promotion of workshops and expert missions has rarely been matched by systematic managerial follow-up or capacity development.

The third set of difficulties relates to the loose match between demand and supply for ECD assistance. Harmonization of ECD strategies has been elusive across the development community. Cooperation across agencies has been weak. And ECD support has succumbed to the same dysfunctions that have afflicted aid coordination and technical assistance delivery mechanisms. It has been supply-driven, expatriate-centered, and poorly adapted to the circumstances of individual countries.

Thus, we face very complex challenges. Yet the prospects for ECD are more favorable today than they have ever been, for three main reasons.
First, there is a global trend toward more accountable, responsive, and efficient government. Second, the role of evaluation within individual development assistance agencies is gaining in clarity and effectiveness. And third, the outlook for development partnership across the development community is brighter than it has ever been.

We have learned a great deal about ECD over the past decade. We know that evaluation produces value only if it is adequately connected to public sector reform and designed to complement improved governance. The financial crisis in East Asia and other parts of the developing world is a brutal reminder that, in an increasingly integrated global economy, policy convergence and sound macroeconomic fundamentals are not enough. Institutions are equally important. The quality of public expenditures is as significant as fiscal balance. Large-scale, low-return projects (which generally flourish where evaluation is weak) tend to aggravate financial crises. State and corporate governance issues have come center stage and are shaping market perceptions. All this is generating considerable momentum behind public sector reform initiatives.

Substantial experience with the new public management has been gained in developed countries. From Australia to Canada, and from the United States to New Zealand, evaluation is contributing to the revitalization of good and responsive government at both the federal and local levels. No standard model exists for export to the developing countries and the transition economies. But a few key principles, forged through experiments in a wide range of circumstances, have emerged, and these should inform our ECD efforts.

First, with respect to the public sector, results-based management (RBM) in public service delivery should become the standard.
This, in turn, involves setting up appropriate accountability systems geared to ensuring appropriate checks and balances within the government hierarchy. RBM also means a focus on government policies and practices, especially those governing civil service management and public expenditure administration, particularly the rules of the game that govern resource allocation, oversight, and quality management in service delivery.

Second, with respect to the market, the new public management favors setting up external mechanisms to bring competitive forces to bear on public sector performance through contestability, devolution, and outsourcing.

Third, with respect to civil society, the new public management amplifies the voice of citizens about public policies and public service provision through direct feedback mechanisms and involvement in civil society organizations, public hearings, and other participatory mechanisms.

The same principles have begun to be applied within developing countries. ECD experience in Costa Rica, China, and Chile, among others, suggests that the trend toward decentralization, deregulation, and debureaucratization in public sector reform has raised the demand for evaluation. Each country seeks to adapt the new public management principles to its own requirements and constraints. Rather than being seen as an imposition from the outside, evaluation is increasingly viewed as an instrument of good governance. As a result, the demand constraint is becoming less limiting.

Another reason for hope has to do with positive changes within the development community. As an illustration, let me describe how the World Bank views evaluation in the Wolfensohn era.
We are doing business according to four basic principles: (1) well-defined objectives (what we are trying to achieve); (2) a clear strategy (how we propose to get there); (3) monitorable indicators (to know whether we are on track); and (4) evaluation of results, both for accountability and for learning.

None of this is easy. In the development assistance business we are several steps removed from outcomes, let alone impacts. But this does not mean that we can shy away from defining the results we expect from the public and private funds that have been put at our disposal. It does mean encouraging our member countries to associate civil society and the private sector with the design of country strategies. It also implies assisting countries to keep score in a transparent fashion. It means moving beyond the adjustment paradigm to develop participatory scorecards to track the progress of structural reform and social development.

Conceptually, this approach is similar to the one used by the New York Police Department to reduce crime. Its first step was to recognize that crime reduction, not administrative effort (the number of patrols dispatched), was the objective. Having agreed to this, the focus turned to developing a strategy. Performance was monitored regularly. And tough questions about the strategy were asked if the numbers did not point in the right direction. According to Tom Peters, "what gets measured gets done."

Obviously, performance is easier to measure, and to attribute, close to the performer. Hence the traditional focus on inputs and outputs. But the bottom line is outcomes and impacts. It is important to seek measures at all four tiers (as prescribed by the logical framework) because there is no solid theory linking them, and development through partnership requires shared responsibility toward jointly agreed results, as well as distinct accountabilities for each of the partners.
This explains why the Country Assistance Strategy (CAS) is being renamed the Country Partnership Strategy (CPS), and also why, for accountability purposes, we are combining a scorecard that focuses on internal performance (quality at entry, portfolio management, timeliness, and budget efficiency) with an evaluation system that focuses on the country level, the sector level, and corporate performance. In parallel, we are connecting evaluation to knowledge management to achieve organizational learning. Similar developments are taking place throughout the development community.

The final reason for optimism about the prospects for ECD is that there is increased coherence in development priorities across the system, and a thirst for inclusion and partnership. This workshop illustrates the trend. We have come together in Abidjan from the four corners of Africa, and from the four corners of the world. We have come to listen to one another, and to learn from one another.

Development is all about results. To achieve results, we need a roadmap, a compass, and a way to track our progress. This implies a capacity on the ground to collect, verify, assess, and use performance information. Experience among all development assistance agencies converges on one lesson: without institutional capacity on the ground, the monitoring and evaluation components of projects are mere paper exercises, of no value whatsoever.

At the country level, the demand for policy and program evaluation capacities and skills is rising as country assistance strategies involve an increasingly wide range of partners. The time is past when development assistance agencies could neatly isolate their interventions and plant their flags on enclaves and islands of development.
As the development consensus on priorities has solidified, and as efforts to move development assistance to the higher plane of country programs and sector policies have succeeded, development coalitions have begun to emerge. In this context, it is proving increasingly difficult to ensure coherence in development interventions through headquarters-based coordination mechanisms. The country is now at the center of the development partnership, and evaluation needs to be located as close to the ground as possible.

In sum, evaluation capacity development means more than imparting evaluation skills. It means fitting evaluation structures, systems, and processes to new public sector reform strategies. This, in turn, requires a clear vision of where country priorities lie and what role the development partnerships can play in promoting progress toward jointly agreed development goals.

Let me conclude with an anecdote from Mr. Wolfensohn's recent session with chief operating officers from ten of the world's leading companies, as told by Mark Baird, our Vice President, Corporate Strategy: "In discussing how we measure performance, one captain of industry said that he saw himself as a speed skater who had only one well-specified measure of success (profit) and knew whether he had won or lost as soon as the race was over. However, he noted that we are more like figure skaters. We complete our performance and then depend on the judgment of a panel to determine how well we did. They can judge the technical quality of our jumps and will certainly note when we fall. But much of their rating will depend on the difficulty of the routine and the passion displayed in its execution."

This is where we are today with evaluation. While our techniques are important, and we should never compromise our integrity, our work is less a science than an art.
What we do know is that our basic purpose, tracking progress toward poverty reduction, is essential to the future survival and welfare of the planet. That is why this workshop is both so challenging and so important.

Part II: Experiences in Evaluation Capacity Development

7. Evaluation Development in Morocco: Approach Through Training

Kaddour Tahri and Abderrahmane Haouach

Morocco, like a good number of developing countries, launched a vast investment program in the second half of the 1970s to accelerate the development of its economy. But the results obtained were not commensurate with the efforts made.

To analyze the reasons for this adequately, a survey was conducted at the various ministries and parastatals to identify the management mechanisms of development projects. The results of the survey showed that 39 percent of those interviewed had no mechanism at all for studying and evaluating investment projects. And even where a mechanism existed, the capacity to prepare and implement projects remained inadequate, because nearly 50 percent of the staff in these units were essentially carrying out administrative management duties. The most studied aspects of projects in those units almost exclusively concerned the technical preparation and the financial monitoring of projects. Financial/economic evaluation and post-evaluation were relatively less developed, especially in infrastructure, agriculture, and the social sector. About 90 percent of the departments involved in the survey expressed the need for additional training in project management for staff concerned with carrying out project investment surveys.

Faced with this situation, the government implemented a training program in the preparation, monitoring, and evaluation of projects to strengthen its planning and technical services in the various ministries and parastatals.
This training program, which was designed to strengthen the national capacity to analyze and manage projects across all sectors of the national economy, was carried out in three phases:

The first phase, between 1984 and 1987, focused on the overall design of the program and on training the trainers and national civil servants who would ensure the necessary continuity across all sectors of the national economy. The major activities of this phase concerned the organization of sector seminars and the production of methodological manuals and guides on the preparation and analysis of development projects. Measures to sensitize senior officers were included in these training activities to enhance the transfer of knowledge and techniques into the day-to-day running of the civil service.

The second phase, which spanned the period from 1988 to 1991, was dedicated to the organization of seminars on project monitoring and the writing of operational guides on project analysis and monitoring, following the training of managerial staff in project analysis.

The third phase, from 1992 to 1996, was dedicated to sustaining the training in sector monitoring of projects and to organizing sector seminars on the retrospective evaluation of development projects and programs.

During all these phases, the ministries and parastatals also organized training activities for other managerial staff in their own technical services and for those in charge of preparing programs and monitoring investment projects.
The achievements of the Programme National de Formation en Analyse et Gestion des Projets (National Training Program in Project Analysis and Management, or PNAP), whether at the level of training in project preparation, management, and evaluation or at the level of surveys and the production of operational manuals and guides, fully justified the initiative taken to strengthen and institutionalize the program.

Strengthening national capacity in the preparation, monitoring, and evaluation of investment projects

Presentation of the PNAP program: Conscious of the priority of improving development management in the public sector, the government has encouraged all policies aimed at strengthening national capacity in the management and evaluation of investment projects and programs. To this end, the PNAP was launched in 1984 by the Ministry of Planning with the assistance of the UNDP, the World Bank, and the Coopération Française, focused on the following areas.

Objectives:
* Increase the knowledge and improve the professional competence of managerial staff in strategic management, analysis, monitoring, and retrospective evaluation of projects.
* Design and encourage the use of methods and techniques derived from the best approaches in use in specialized international organizations.
* Provide the country with a core of specialized trainers in the management of development projects and programs.

Strategy: Since its inception, PNAP has successfully carried out a set of activities, including the training of managerial staff from both the public sector and the parastatals; the production of manuals on the methods to be applied for selecting, monitoring, and evaluating projects; and the conduct of specific studies in investment project management.
Results: At the end of ten years of sustained effort, PNAP was able to record significant achievements, both qualitatively and quantitatively.

Qualitative balance sheet: The participants' view of the training was highly positive. This was observed across the various components taught:
* In-service training of 750 managerial staff (50 trainers and 700 analysts and management controllers)
* Surveys and production of operational manuals and guides:
  - Manuals of technical analysis; marketing; and financial, economic, and institutional analysis of projects
  - Monograph on retrospective evaluation in Morocco
  - Survey of the facilities for monitoring within the civil service
  - Survey of the facilities for retrospective evaluation in Morocco
  - Survey of the impact of PNAP training.

Impact of the training: The effects of the PNAP training on professional activities and on the personal well-being of staff have been highly appreciated by the trainees.¹ Ninety-four percent of the population studied considered the training useful in their professional lives. The positive effects of the training received can be seen in its correct application, as well as in the use of the available tools and methods in professional life. The everyday practical use of the techniques learned to complete the training assignments was considered satisfactory by 81 percent of the participants. The managers of the staff trained considered the impact of the training highly favorable or favorable in 93 percent of the cases. Fifty-six percent of the participants studied were administrative heads (service heads and above); this has enabled 94 percent of the beneficiary institutions to introduce various improvements.
Thus, the changes introduced, by order of importance, included:
* Improvement of working documents: 39 percent
* Design of new tools: 29 percent
* Acquisition of new attributes: 15 percent
* Creation of structures: 11 percent.

The innovations introduced within the beneficiaries' institutions were favorably welcomed in the following proportions:
* Very favorable: 25 percent
* Favorable: 60 percent
* Less favorable: 15 percent.

Evaluation of the impact on the promotion of trainees: The favorable impact of the training is also confirmed by the individual promotion of trainees. Thus, this training program has helped to improve professional relationships within the institution in 91 percent of the cases. A positive impact of the training on the promotion of beneficiaries' careers was recorded in 35 percent of the cases. This is a significant result, considering the rigidity that characterizes promotions in the civil service structure, on the one hand, and, on the other, that the trainees were already occupying important positions before the training. Furthermore, the training placed more emphasis on assisting the beneficiaries in performing their duties (programming, analyzing, monitoring, evaluating, controlling, and auditing) than on training administrative heads. From this point of view, 68 percent of the trainees thought that the training contributed greatly, or at least adequately, to the enhancement of their careers, while 32 percent considered its impact on professional enhancement inadequate.

Impact of evaluation activity on institutional development: The most significant impact of training activities and enterprise sensitization through PNAP has been the institutional development of evaluation activity. This impact was observed in the following:
* A team of trainers versed in modern techniques of project management, monitoring, and evaluation was assembled.
These trainers, working in the various departments of the civil service, are agents of development in their respective offices through the dissemination of new working techniques and methodology.
* A Project Pre-Selection Interministerial Committee (CIPP) was established.
* A network of managerial staff initiated into retrospective evaluation techniques and methodology was created.
* The institutionalization of M&E was strengthened in different sectors.

Institutionalization of the Centre National d'Evaluation des Programmes (CNEP) (National Center for Program Evaluation): The new organizational chart of the Ministry of Economic Projection and Planning explicitly strengthened the activities of programming, monitoring, and evaluating programs. PNAP became CNEP to expand and develop evaluation activity within the public sector.

Emergence of retrospective evaluation sector units: A good number of ministries have reorganized their departments, or propose to do so. The salient point that deserves to be emphasized here is the strengthening of the institutionalization of M&E through the introduction of services, divisions, and directorates of varying nomenclature: monitoring, monitoring-evaluation, or evaluation. This is the case in both horizontal and sectoral ministries (Economy and Finance, Foreign Affairs and Cooperation, Agriculture, National Education, Infrastructure, Health, and Housing). Moreover, the results of the survey on the project environment and the need for training confirm this favorable development in M&E: they indicate that 90 percent of the responding directorates and institutions have units in charge of the M&E of projects, with the stature of directorates (20 percent), divisions (28 percent), services (30 percent), and offices or other (22 percent). The remaining 10 percent of institutions, which have yet to form M&E units, have affirmed their intention to create them.
Some of these units have started to define policies and procedures, and some M&E tools have been identified and tried. Some forms, reports, and performance indicators have been put in place. Although a number of these activities have not been very successful, it is worth noting that a permanent, dynamic process is in place. The retrospective evaluation actions, or the activities that have been carried out and can be assimilated within them, are generally of good quality.

At the highest strategic level, His Royal Highness has established monitoring and evaluation councils for the country's priority concerns: the Councils for Monitoring Social Development Strategy, for Monitoring Social Dialogue, for Boosting and Monitoring Investment, and for Monitoring Private Sector Development.

The involvement of foreign services and decentralized institutions in M&E should also be mentioned. Larger powers (especially in the weekly M&E of sector projects and programs at the provincial and local levels) were granted to Governors and Walis in provinces and prefectures in October 1993. Technical Committee (interministerial) provincial meetings have been extended to include local government chairmen.

Surveys carried out by CNEP have shown that the most important lesson is the favorable trend resulting from the establishment of units in charge of project management in general, and M&E in particular. Thus, at the end of 1997, 90 percent of the institutions surveyed had M&E units, as against 86 percent in 1989 and 61 percent in 1984. A particular characteristic of these units deserves to be mentioned. The units currently in charge of M&E are specialized: 37 percent exclusively handle M&E (against 28 percent in 1989), and 21 percent of these units are ad hoc, constituted during the implementation of projects (against 28 percent in 1989).
This explains why only 42 percent are still multifunctional and carry out other administrative duties in addition to M&E (against 44 percent in 1989). This multifunctional nature is more common in the social sectors (57 percent).

Present state of the evaluation system in Morocco

For a better understanding of the present evaluation systems, the notion of retrospective evaluation should go beyond projects and programs. There is a need, rather, to appreciate the presence of a retrospective evaluation process at the level of the management of State affairs. This process is characterized by the intervention of various actors at horizontal and vertical levels.

Horizontal ministries: The Ministry of Economic Projection and Planning does not, in real terms, carry out retrospective evaluation, although a part of its operations (especially the monitoring of programs and the preparation of budgets) touches on evaluation methodology. This department plays a significant role in the production of the socioeconomic information necessary for carrying out evaluation surveys. There is also a need to note the institutionalization of PNAP as CNEP, with the objective of organizing and developing evaluation activities within the public service.

The Ministry of Economy and Finance is not expected to carry out retrospective evaluation activities. In its directorates, however, activities that relate to retrospective evaluation are to be found. At the Directorate of General Inspection of Finance, one finds retrospective statements of public finances (revenue and expenditures). These are not retrospective evaluation activities, in the sense that these actions do not provide information regarding the status of achievement of objectives or the expected effects and impact.
The Directorate of Public Establishments and Participation (DEPP) carries out post-control activities that could be assimilated into retrospective evaluation; these have to do mainly with issues of financial management and organization.

The Revenue Court has the most well-defined mandate in management control, in addition to accounting and financial control. It also has the responsibility for monitoring the implementation of financial laws. Article 71 of the law relating to the Revenue Court states that the aim of the management control of the targeted institutions is to assess their quality and, eventually, to prepare suggestions on the means likely to improve their methods and increase their efficiency and output. The Court's responsibilities cover all aspects of management. In this vein, the Court looks into the achievement of set objectives, the means utilized, the costs of goods and services procured, the prices used, and the financial results. The Revenue Court therefore has the mandate to look into the financial results and make recommendations bearing on central factors that could prompt companies, in particular, to become conscious of their shortfalls and to rectify them. The Revenue Court does not, however, have the mandate to evaluate public investment projects and programs.

Technical ministries: At the Ministries of Public Works and Agriculture, there is a level of analysis that probably represents the most advanced retrospective evaluation among the technical ministries. At the Ministries of National Education, Higher Education, Senior Staff Training and Scientific Research, and Health, the activities of recently created evaluation structures are mainly limited to financial monitoring and accounting, which is generally coordinated and regulated.
Despite the efforts made to develop the evaluation mechanism within the civil service, there is still a need to strengthen national and sector capacities through training activities and to design and enhance the use of appropriate methodologies and procedures in retrospective evaluation.

Lessons from the Moroccan experience in the strengthening of evaluation capacities

Strong points:
* Managers and senior staff at all levels subscribed to the introduction of an M&E system as a management tool to improve project and program management in particular, and socioeconomic development in general. The most informed decisionmakers insist that the M&E system be integrated at a high level of the department directly involved in evaluation. Without demanding that the retrospective evaluation be entirely internal to the body to be evaluated, one may insist that it should not be monopolized by a police-like department, because it would then lose its appeal and interest as a modern technique for enhancing efficiency, effectiveness, and responsibility. The obvious goodwill toward monitoring and retrospective evaluation is the fruit of the sensitization efforts made by CNEP, including the organization of seminars with the backing of international bodies, especially the UNDP and the World Bank.
* The second strong point is the availability of managers and senior staff adequately initiated into techniques of M&E, especially through workshops and training seminars organized regularly by the former PNAP since its launch in 1984. This included 750 senior staff and managers, of whom 210 benefited from long-course training in the form of a Diplôme d'Enseignement Supérieur Spécialisé (DESS) in management and audit, thanks to a close collaboration with the Coopération Française.
* The third point concerns the strengthening and development of the institutionalization of M&E in various sectors, especially public works, agriculture, health, and education.

Weak points:
* The late adoption of an appropriate regulation instituting and organizing retrospective evaluation in the civil service, and the absence of a clear mandate for CNEP to carry out retrospective evaluation. The decision to carry out an evaluation is currently more often the initiative of donors than of the national party.
* Inadequacy in operating and using the socioeconomic information generated through the implementation of development projects and programs, as well as inadequacy in its analysis and evaluation. The problem of information management lies primarily at the level of sorting, communication networks, and dissemination. Information is often presented in crude data form, without analysis or in-depth evaluation. Decisionmakers find themselves overwhelmed with a volume of data that contains useful details for those at the bottom, but becomes less useful higher in the hierarchy.
* There is a lack of explicit procedures in matters of evaluation. Furthermore, there are still few management procedure manuals that are really adapted to the Moroccan administration.
* National competence is underused. Little effort has been made by the donors to include the local party in the implementation of retrospective evaluations. The World Bank, the UNDP, and the ADB, however, have taken initiatives in this direction.
* The central government underuses, or uses poorly, the available competence in the regions and in the local ministries. In the regions, senior staff have daily contact with development reality and know how to carry out informative analysis in their working environment.
It is this expertise that a well-structured retrospective evaluation system should seek to use, especially through sensitization, training, and association, to be followed by the taking-over of evaluation work.

Prospects: Toward the establishment of an operational system of evaluation

Considerable efforts have been made to strengthen national capacities in M&E. Sensitization, training, and the production of manuals and methodological guides have certainly had a positive impact on the development and institutionalization of the evaluation activity. But the present M&E system is still weak and requires more technical, financial, and institutional support. CNEP, with the backing of national and international partners, will spare no effort in sensitizing economic and social actors to the importance of evaluation, disseminating evaluation methods, and participating in the preparation and implementation of evaluation surveys in conjunction with the departments and other bodies concerned.

The improvement and development of a national system for evaluating projects and programs (of which CNEP could be the major organizer) require the adoption and implementation of a set of measures revolving around the following major points:
* Grant CNEP an adequate statute and a clear mandate for retrospective evaluation.
* Make CNEP a central depository for evaluation surveys and a privileged partner of international institutions in M&E.
* Strengthen the support and exchange mechanisms in order to ensure that M&E units benefit from donors' expertise.
* Strengthen the sector units of retrospective evaluation.
* Establish an institutional framework of evaluation (at both the national and sectoral levels), comprising:
  - An interministerial commission of evaluation, with a secretariat provided by CNEP
  - A national evaluation fund to finance surveys and research in evaluation.
The implementation of such a system will have the effect of capitalizing on experience and will ensure a better rationalization of the allocation and use of public resources for a more effective management of economic and social development initiatives.

1. The information presented in this section comes from the survey conducted in 1997 by CNEP.

8. Monitoring and Evaluation: The Case for Zimbabwe

Molly Mandinyenya

Zimbabwe has embarked on a sustained and incremental process of developing M&E capacity. The process is being spearheaded by the National Economic Planning Commission (NEPC), which falls under the Office of the President and Cabinet. NEPC's main functions are to:
* Formulate and coordinate development plans and policies
* Prepare and manage government's capital budget
* Monitor and evaluate policies, programs, and projects
* Assist in the formulation of district and provincial plans that feed into national development plans.

The primary drive to strengthen M&E was the need to put in place a mechanism to ensure effective implementation of development plans and policies such as Growth with Equity (1981); the Transitional National Development Plan (1982/83-1984/85); the First Five-Year National Development Plan (1986-90); and the Second Five-Year National Development Plan (1990-95). The 1980-90 decade was characterised by slow economic growth, high unemployment, and huge budget deficits. It was therefore imperative that government use the increasingly scarce resources efficiently and effectively; hence the need to strengthen M&E. The desire to ensure effective implementation of policies, programs, and projects was first cemented by the establishment, in 1988, of an M&E unit in the (then) National Planning Agency (NPA), which was housed in the (then) Ministry of Finance, Economic Planning, and Development.
Since then, the M&E building process, although slow, has taken a logical and systematic route:
* In 1991 the director of the NPA collaborated with a UNDP-funded external specialist to draw up the first M&E conceptual framework for Zimbabwe, encompassing implementation monitoring, project completion reporting, performance monitoring, and impact evaluation. The study noted that some level of implementation monitoring was going on in ministries such as Health, Agriculture, and Education, but the major drawback was the lack of a central agency to coordinate inputs from the various players and use the information in central government decisionmaking and policy formulation. The report also noted the crucial role played in M&E by the various coordination bodies, such as the Public Accounts Committee.
* In line with the study's recommendation that the M&E function should be housed in the highest office to give it political clout and muscle, government established the National Economic Planning Commission, located it in the Office of the President and Cabinet, and tasked it with, among other responsibilities, the M&E of policies, projects, and programs (in collaboration with line ministries and executing agencies).
* In 1992 and 1994 there were exchange visits to the Operations Evaluation Department (OED) of the World Bank. These helped to build knowledge about best M&E practices.
* In 1994, NEPC, with assistance from the World Bank, engaged international experts from the Netherlands Economic Institute (NEI) to prepare a project proposal for strengthening M&E capacity in Zimbabwe. A six-year (four-phase) project was designed. It was decided that the introduction of a government-wide capacity building program in M&E should proceed only after a pilot phase in which the institutional framework for M&E would be fully defined and understood.
* Following the NEI recommendations, between 1996 and May 1998, the Bank financed a pilot project in M&E, which has indeed laid the cornerstone for M&E development in Zimbabwe.

The pilot project on strengthening monitoring and evaluation capacity (1996-98)

The objectives of the project were to:
* Bring about improvements in planning
* Establish standard practices and procedures for M&E
* Sharpen skills and broaden knowledge of M&E techniques.

The three pilot ministries selected for the project were Agriculture and Lands, Health and Child Welfare, and Local Government and National Housing.

Major findings:
* The M&E section has not been effective in influencing resource allocation, in fostering efficiency and effectiveness in project and program management, or in influencing policy analysis and formulation.
* Much of the evaluation done is donor-driven. M&E is not considered part of the management process. There is a general lack of appreciation, demand, and use of M&E information, which has culminated in general complacency on the part of all concerned. While NEPC has always requested this information through the Public Sector Investment Program (PSIP) circular, no enforcement measures were put in place because there were no links between resource allocation and M&E.
* There were no guidelines to direct operations at all levels, or to give common definitions of terms such as monitoring, evaluation, programs, and projects.
* Institutional position of M&E: The directive to set up planning units in all line ministries that would house the M&E function and provide an operational link with NEPC was not followed through. Certain ministries later disbanded the units because of lack of clarity and the tilting of power away from the operational departments toward the planning units (the Ministries of Health and of Public Construction and National Housing).
Currently only about three line ministries (the Ministries of Education and Culture, Higher Education and Technology, and Agriculture and Lands) have M&E units.
* No ministry except the former Ministry of Public Construction and National Housing has a budget for M&E.
* Human resources for M&E: Very few people have been trained in M&E, and those that have received training did so on the job. They lack the theoretical and conceptual grounding and framework crucial to anchoring the whole process.
* There is no centralized management information system.

Outputs:
* A situation analysis study laid a strong basis for the next phase of the pilot study.
* NEPC worked in collaboration with assigned staff from line ministries to develop M&E guideline manuals and guides for users in standardizing systems. The guidelines define the roles and responsibilities of the various players; provide a framework for the establishment of the database; and illustrate procedures for carrying out M&E and preparing the relevant reports. The guidelines provide the criteria for selecting projects and programs and define terms to establish a common understanding.
* A series of training workshops was organized for officials from NEPC and from the pilot and other ministries.
* A computerized database was set up to facilitate M&E. There are plans to expand the Local Area Network to a Wide Area Network once resources are available.

Project achievements: The project did succeed in some measure in achieving its intended objectives. For example:
* The framework for the PSIP (and for M&E within that process) has been strengthened, and tools have been provided to help NEPC be more effective in this regard.
* Procedures and practices have been improved and codified to a certain extent in the M&E guidelines. The guidelines prepared under the project will go a long way toward establishing practices and procedures for M&E and increasing the level of skills.
* Dissemination of M&E techniques and best practices took place, and the core of staff with relevant skills and experience was increased in NEPC and in the pilot ministries. Numbers, however, were lower than expected. No training of trainers was carried out because of budget constraints, and no test evaluations were carried out for the same reason.

Drawing on the project findings, a new system for M&E has been designed and is being introduced. The new system assumes that line ministries and departments are the main actors in M&E, while NEPC, as the apex body, coordinates and directs the process. NEPC has resolved to strengthen the link between M&E and the PSIP, particularly with regard to resource allocation. These developments are positively changing line ministries' attitudes toward M&E. Ministries are now required to submit (together with their PSIP bids) their annual evaluation plans, and to ensure that these are included in the budget. Ministries have been warned that failure to comply may result in budgetary cuts or reversal of proposed allocations. So far, the response has been overwhelming, and this will facilitate and assist in building the M&E database. Each ministry will be required to designate a focal point or unit for M&E; as a result, the function should cascade to all the lower levels.

Awareness of the potential contribution of M&E to improved public investment planning and appraisal has increased, as has knowledge of M&E techniques in the public sector in general. One of the pilot ministries, the Ministry of Health and Child Welfare, is already planning to establish an M&E unit as part of its next three-year plan. This is a serious breakthrough, particularly for a ministry that was one of the pilots. The project has thus laid a strong foundation for the development of an effective M&E system.
Lessons learned from the pilot project have formed a critical base for the design of a proposal for phase 2 of the project. This phase (which should have started in June 1998) has been delayed because of outstanding policy positions that the World Bank asked government to fulfill first. The major thrust will be to consolidate gains from the first phase and replicate the program government-wide. The major activities will be to set up the institutional and legal structures that can support the system; train personnel in M&E, both in NEPC and in the line ministries; further develop and expand the M&E database; and develop a PSIP manual.

The demand for M&E information at the apex level, if consistent and directly linked to the decisionmaking process, should manifest itself as demand at the line-ministry level. Demand at the apex level should be propelled by political determination.

Lessons learned from the pilot phase: A number of critical lessons on how to build demand for evaluation in the pilot ministries are outlined here:
* There is a need to build strong feedback mechanisms between the apex M&E unit and the line ministries to foster demand. Agreed positions have to be followed through to demonstrate commitment.
* To buttress the demand for M&E, the forerunners or pilots must be drawn from sector ministries that have a more developmental approach to the process. Service ministries such as Public Construction have a fragmented approach, and hence cannot fully appreciate M&E from a holistic angle.
* Awareness campaigns carried out through workshops and meetings at all relevant levels were an important contributory factor in increasing demand. The awareness should reflect the benefits accruing to the line ministries. Hence the process must ensure that M&E is not viewed as merely a time-wasting exercise imposed by some external policing agency. Rather, the rapport between NEPC and line ministries needs to be strengthened.
* There is a need to provide a budget for M&E activities. This will ensure the consistent and continuous supply of M&E information, which should ultimately generate the demand for M&E. Provision of a budget also elevates the function and accords it a visibility that assists in generating demand. The budgetary allocations toward this function should be maintained, regardless of financial constraints. In Zimbabwe, arbitrary cuts effected to finance other demands considered more urgent have been responsible for relegating this crucial function to obscurity. Ministries are quick to point out that the inability to carry out implementation monitoring or any other related function was brought about by the cuts (which may not necessarily be the case).
* Demand for M&E information can be cultivated by a continuous and well-conceived supply of such information to users through the development of a user-sensitive database. This will generate subsequent and continued demand once benefits are recognized.
* There is a need to balance supply and demand factors in the process to achieve sustainable results. The initial thrust in the pilot project was heavily supply-based (production of the PSIP manual, development of the database, and the like). It lacked the critical elements necessary to effectively generate demand.
* There is a need to build consensus, ownership, and commitment among the key stakeholders.
* There is a need to build capacity in M&E at both the national and line-ministry levels. Such strength will foster professionalism and create support at all levels.
* All line ministries must have M&E units or entry points.
* The link between M&E and resource allocation should be made clear to all stakeholders, and particularly to implementers, so that they can provide all the necessary information.
* The link between M&E and decisionmaking at the apex level must be apparent, and this should filter down to the ministry level.
Emerging opportunities for M&E

Performance management: As part of its overall reform program, the government of Zimbabwe is introducing performance management across the entire public service. The program has been commissioned as part of the overall attempt to reduce expenditure and contain civil service outlays within budgetary allocations, and in the process to engender an output-oriented, efficient, and effective management system. This is all centered on M&E, since it revolves around performance measurement. The process entails defining SMART (specific, measurable, achievable, relevant, and time-bound) objectives. It is indeed one of the major building blocks for evaluation, because these objectives provide benchmarks for any subsequent evaluation. As Mark Baird puts it, "No public sector can afford to overlook the importance of clearly defining its objectives and priorities, assessing performance and results against well defined benchmarks, and changing a bureaucratic culture into one that stresses client service and achievements of results."

Decentralization: Zimbabwe has adopted decentralization as a development strategy, and this demands a strong M&E system to ensure that the center is constantly well-informed. A number of permanent secretaries have already expressed their desire to put in place a strong M&E system.

Sector programming: Government is moving from the project approach to sector programming. This is budget support managed by government institutions according to national priorities and rules and regulations. One major strength of sector programming is that it facilitates evaluation, particularly impact evaluation. Basket funding assists in eliminating excessive, conflicting, and multiple donor requirements. It also assists in reducing the danger of diverting scarce country evaluation capacity to satisfy donor requirements (which might not align with the greatest benefit from evaluation in the government).
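The benchmark role of SMART objectives described above can be illustrated with a minimal sketch. All names and figures here are hypothetical illustrations, not drawn from the Zimbabwean performance management program; the point is only that a measurable target recorded at the outset gives a later evaluation something concrete to assess against.

```python
from dataclasses import dataclass

@dataclass
class SmartObjective:
    """A performance objective with a measurable target and a deadline."""
    description: str
    indicator: str   # what is measured
    target: float    # the benchmark value
    deadline: str    # the time-bound element, e.g. a fiscal year

    def assess(self, actual: float) -> str:
        """Compare an actual result against the recorded benchmark."""
        if actual >= self.target:
            return f"met ({actual} vs target {self.target})"
        return f"not met ({actual} vs target {self.target})"

# Hypothetical example objective
obj = SmartObjective(
    description="Raise immunization coverage",
    indicator="percent of children fully immunized",
    target=80.0,
    deadline="FY1999",
)
print(obj.assess(74.5))  # -> not met (74.5 vs target 80.0)
```

Because the target and deadline are fixed in advance, the subsequent evaluation is a comparison rather than a negotiation, which is what gives SMART objectives their value as building blocks for M&E.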
Establishment of a national working group on financial management and accountability: The government of Zimbabwe has decided to establish a National Working Group (NWG) on financial management and accountability to buttress the sector programming approach. The vision of establishing a cooperative forum with an NWG is to coordinate activities in national financial management and accountability to maintain and improve the national management of development activities. Some of the key objectives of this program are to:
* Establish an environment conducive to the accountability of aid funds
* Strengthen the government's implementation and monitoring capacity; streamline procedures to improve efficiency in aid utilization
* Improve impact evaluation
* Improve the efficiency of the use of technical assistance
* Develop an integrated aid management information system according to requirements specified by users at all levels.

Zimbabwe Vision 2020: Evaluation and good governance are intricately linked. By selecting good governance as a key aspiration, Zimbabwe is inculcating a culture of M&E.

ZIMPREST: A special committee comprising all the major stakeholders has been set up to monitor the Zimbabwe Programme on Economic and Social Transformation (ZIMPREST). This body is chaired by the Office of the President and Cabinet. A monitoring model was developed to reveal policy effects and indicators. All these developments pave the way for a strong future M&E system.

Challenges
* To generate effective demand for evaluation information for use in decisionmaking at political and managerial levels
* To develop a comprehensive M&E database
* To establish M&E units/entry points in all line ministries
* To set aside a budget for M&E operations
* To maintain the planned expenditure level.
Continuous or unpredictable cuts in the capital budget to accommodate unplanned expenditures make a mockery of M&E, because operations will be forced to slow down to make way for other "emergencies."
* To replicate the completed pilot project governmentwide, and hence to develop a strong national M&E system
* To make M&E a part of the management process (and not to view it as a time-consuming request by NEPC)
* To fully integrate M&E within all the NEPC functions
* To eliminate all existing duplications by other stakeholders that inhibit M&E strengthening.

Conclusions

A new culture is emerging in Zimbabwe that is receptive and eager to apply M&E principles in the decisionmaking process. The pilot project and a number of other parallel initiatives have improved awareness and appreciation for M&E.

Annex I: Coordination Bodies

Coordination bodies include:
* Public Accounts Committee (PAC)
* The Cabinet Committee on Development (CCD)
* Foreign Investment Committee
* Foreign Exchange Allocation Committee
* Interministerial Committee on Resettlement
* National Agriculture and Rural Development Coordinating Committee.

M&E efforts of national coordination bodies

These bodies were meant to provide interministerial cross-links at the national level. The Public Accounts Committee of Parliament examines the audited annual reports of line ministries and agencies. The Interministerial Committee on Resettlement successfully coordinated and monitored the timely implementation of the resettlement program. The National Agriculture and Rural Development Coordinating Committee, established in the early 1980s, was actively involved in the planning and monitoring of agriculture and rural development projects in the country.
Annex II: Outline of a possible evaluation framework for Zimbabwe

NEPC evaluation function
* Sets evaluation directives and standards
* Sets guidelines for annual evaluation plans of line ministries and key agencies
* Develops and organizes training programs and workshops to disseminate methods and lessons of a cross-cutting nature
* Reviews terms of reference (TORs) prepared by ministries for evaluation of major programs
* Reviews the quality of evaluations done by ministries
* Undertakes or participates in selected evaluations of complex projects (for example, where more than two ministries are involved)
* Uses key performance information and evaluations from ministries to inform the Office of the President and Cabinet (OPC) and the Ministry of Finance (MOF); draws implications for policy and new program formulation; monitors performance across sectors and systemic issues; and delivers information before budget allocations
* Follows up on the implementation of evaluation recommendations
* Prepares annual report on evaluation results and use of evaluation findings.
Evaluation function in line ministries
* Defines evaluation plans covering all major programs, taking into account evaluations by third parties
* Defines (jointly with the sector units of NEPC) the key performance indicators to be used to assess progress and results. These should be defined at the design/appraisal phase of new programs, and retrofitted to ongoing programs and projects
* Delivers key performance information to NEPC semiannually for major programs and major evaluation studies, according to the annual evaluation program
* Budgets resources for M&E tasks
* Establishes feedback processes to ensure use of evaluation results within the ministry
* Within each ministry, line departments monitor key performance indicators and submit results quarterly to the internal central evaluation unit
* Sets up an evaluation review committee at senior level to review evaluation reports, and periodically to review performance information.

Organizational implications
* Establish the evaluation function at the apex of each ministry to focus on carrying out evaluations of major programs and projects and the analysis of performance monitoring information. The central evaluation function could initially be attached to central planning units.
* Establish performance monitoring responsibilities for projects and programs in line departments within each ministry.
* The Auditor General will carry out a biannual process audit of the evaluation system established and receive evaluation reports for major projects.

Annex III: Criteria for selecting projects and programs for evaluation

Because of financial constraints and lack of capacity in implementing agencies and the NEPC, it is not possible to evaluate every project or program. A selection of projects for evaluation has to be made. To do this, the following criteria may be used:
* Projects/programs involve large financial investments.
* Pilot projects/programs are likely to be replicated in other areas.
* Projects/programs have undergone major modifications.
* Projects/programs are assigned high priority by their sectors.
* Projects/programs have a high risk factor for operational problems.
* Projects/programs are likely to be extended beyond their planned period of implementation.
* Projects/programs are recommended for mid-term reviews.
* Projects/programs are critical to earning foreign exchange and generating employment.
* Clusters of projects all address the same, similar, or closely related issues.
* Projects/programs have been, or are being, successfully implemented.
* Projects/programs have some impact on the environment.

9 Country Experience: Sectorwide Approach to M&E in the Health Sector in Ghana

Isaac Adams

Within the past decade, the Ministry of Health has implemented a series of related reforms involving sector policy formulation, ministerial restructuring, and administrative reorganization aimed at improving the capacity to deliver effective and efficient health service.
A number of important initiatives have been realized during this period; key among these is the publication of a Five-Year Program of Work (1997-2001), which represents the culmination of the strategic thinking that has evolved over the past few years. This process was characterized by the definition of policies and priorities and a series of consultations involving stakeholders, leading to the development of a Medium-Term Health Strategy. The passing of Act 525 of Parliament (The Ghana Health Service and Teaching Hospitals Act, 1996) completed the restructuring exercise that began in 1993 and fulfills the constitutional requirement for setting up Ghana Health Services as an executing agency within the health sector. By this provision, the Ministry of Health de-links itself from service provision and focuses on sector policy formulation and monitoring. Within the Ministry of Health itself, the reorganization has involved a clearer separation and definition of functions, thus allowing managers more flexibility in decisionmaking and the use of resources.

The commitment by donors to contribute to a common fund that will support the sector program in an integrated manner has created a sector funding scenario where a set of priorities is shared by all stakeholders, leading to a convergence and agreement on key management and administrative systems. This has also led to a new dimension of donor coordination that seeks to minimize duplication, to channel donor resources to areas of real need, and to involve donors more broadly in the decisionmaking process of the sector. Applying the concept that health is much wider than the mandate of the Ministry of Health, more effective linkages have been established with other sectors, including the private sector, and more practical forms of community involvement are being developed at all levels of the health care delivery system.
With the achievement of these essential reform elements, new demands for monitoring and evaluating the performance of the health sector have arisen. I hope to identify these demands and outline the approaches adopted in the context of the Health Sector Reforms Program in Ghana. I will also discuss some of the central issues critical to the development of an effective M&E capacity.

The thrust of the health sector reforms

The impetus for reforms in the health sector is primarily derived from the overall national long-term vision for growth and development, "Ghana Vision 2020," which aims at middle-income status for the country by the year 2020. Among the five main areas defined for priority attention is the need to maximize the healthy and productive lives of Ghanaians. It also derives from the several challenges identified as constraints to improving health status in the medium term. Key challenges are listed below.

Limited access to basic health care in the face of high population growth and limited social infrastructure. At the start of the reforms, over 40 percent of the population lived more than 15 kilometers from a health facility. Even where facilities existed, services were often not available because of lack of personnel, inadequate facilities, the high cost of services, or a combination of these factors.

Inadequate service quality, partly because of the frequent shortages of drugs and essential supplies, the absence of emergency services, and poor staff attitude toward patients. The absence of institutional quality assurance programs, peer review, and performance monitoring systems also led to persistently low-quality services at all levels.

Inadequate funding of health services, especially when available resources have been decreasing in real terms over the years.
Specifically, the funding of the nonwage recurrent costs and the mechanisms for accessing funds have meant that considerable resources remain unused, while substantial needs are not met. Under such circumstances, donor funds tend to replace rather than augment government budgetary allocation.

Inefficient allocation of resources, in addition to the problem of inadequacy, has also created a situation where resources are poorly linked to stated priorities. Expenditure on secondary and tertiary services does not reflect the focus on primary health care, while the bulk of resources are spent on urban and periurban services.

Poor collaboration with communities, other sectors, and the private sector, which has led to the health sector doing things for, rather than with, the community and not taking advantage of the growth of the private sector to expand services. Especially at the district level, the lack of collaboration with other sectors creates a situation where the performance of the health sector is unfairly assessed for the lack of performance (or otherwise) of other sectors.

The health sector reforms are based on carefully selected strategies aimed at addressing these challenges and achieving set targets by the year 2001.

Components of the sector reforms

The reform program is being articulated through three broad components. These components deal essentially with internal changes aimed at improving the performance of the health sector and external relations to ensure improved health outcomes. In line with the country's new, decentralized development planning system, district planning authorities, under the District Assemblies, are responsible for district plans, which are made up of submissions from decentralized ministries and departments. Ministries and sector agencies prepare sectoral plans and programs congruent with national development goals and objectives.
For the purposes of planning, the overall framework is determined by national priorities and sector policies and priorities. Within this framework, sector policy review and policy formulation constitute a key component. In the Ghanaian context, this component is characterized by the development of a strong capacity for policy analysis and the presentation of a sector position on key issues relating to institutional arrangements and organization, resource mobilization and use, and service delivery.

A second component is the agreement on management arrangements with respect to implementation of the policies and priorities identified. Although some of these arrangements, especially the funding mechanism, constitute a departure from traditional modes of practice, they aim at targeting efforts and resources to achieve specified outputs. This component of the reforms centers on common arrangements for disbursement, planning and budgeting, financial control, accounting and audit, procurement, and performance monitoring.

The third component of the reform program is the establishment of appropriate linkages with other sectors to improve performance and to have a more effective influence on health status. The reforms in the health sector provide for an increased effort in advocating health concerns and promoting integrated planning, implementation, monitoring, and evaluation of programs.

Monitoring the five-year program of work

There are different systems for M&E in the Ministry of Health. The monitoring system is based on routine service data, supplemented by infrequent monitoring visits. Generally these systems focus on tracking inputs and activities rather than outputs, and because they are parallel and uncoordinated they lead to inefficient and expensive data collection exercises involving peripheral staff. Health status evaluation is carried out by agencies outside the Ministry of Health; the Ghana Statistical Service is the main agency.
But major assessments do not cover all the health status indicators of interest to the health sector. The reform program, by design, highlights the need for performance monitoring, with its explicit statements on planned results and measurement of actual achievements. In a way, this presents a context for monitoring that is radically different from the existing system.

The key management arrangements also tend to provide a framework for the discussion of activities related to M&E, particularly the arrangements that put the Ministry of Health in charge of implementation. Thus the monitoring frameworks that have evolved have done so to address the concerns of partners. On the part of donors, the system ensures that while their influence in implementation in specific and traditional areas is limited, this is compensated by their involvement in monitoring all aspects of the program. For the Ministry of Health, the system demonstrates, in spite of lack of capacity, the commitment to achieve the set targets and to isolate implementation issues for discussion.

Setting the M&E agenda

The M&E system is thus guided by two broad issues. While the content is guided by the provisions of the Five-Year Program of Work, the process is greatly influenced by the management arrangements, especially where new demands are made on the system by the shift from traditional responsibilities. The main factors determining the agenda for monitoring the Five-Year Program of Work can be classified as:
* Health policy development
* Health financing and financial management arrangements
* Resource allocation
* Organization and level of decentralization
* Management and management systems
* Service delivery
* Regulation
* Human resource issues.

Health policy development

The need to review the performance of the health system and to develop a vision for the future requires an assessment of the sector as a whole.
This important activity is usually undertaken mainly by donors as background information and justification for projects. While these project documents act as sources of information about the sector, ownership is sacrificed and the documents tend to be limited in focus, with undue emphasis on areas that will enable easy justification of the project. But with the development of a sector program with broad-based ownership, the need to assess sector performance in terms of policy implementation becomes paramount.

Three key issues tend to drive the monitoring system. To achieve fruitful reviews, the system must first be able to monitor policy implementation and assess policy outcomes. With the traditional emphasis on annual planning, this becomes a daunting task. And the lack of capacity for policy analysis tends to limit the scope for this area of monitoring.

The issue of setting priorities is also central to the scope of monitoring. The process of identifying priorities in times of scarce resources usually leads to new and sometimes controversial policies that need to be assessed at short intervals.

The emphasis on policy implementation and policy outcome in a sectorwide approach is a departure from the situation where monitoring focuses on inputs (usually financial) and the completion of activities. This has exposed a lack of capacity, especially at the regional and district level (but more surprisingly at the national level) and has triggered an apparent willingness by donors to provide technical assistance to fill these gaps. A major concern is the move by donors to establish territories in areas of technical expertise, which may reintroduce their association with specific program areas.

Health financing and financial management arrangements

The new funding arrangement has also raised concerns for monitoring, especially among the donors.
With the common fund arrangement, or pooled fund system, and full integration of plans, budgets, financial flows, and accounting systems, donor funds would be indistinguishable from government budgetary allocations and would form a part of the published estimates and be released through the Treasury system. This raises some concerns, which again find their way into the monitoring agenda. Because of the apparent loss of control over donor funds, there is a need to ensure a high level of confidence in the accounting and financial management systems. This is reflected in the scope, frequency, and level of involvement of donors in monitoring expenditure in the sector.

The concern that external assistance would substitute for rather than supplement government funding has led to strict monitoring of government allocation to the sector from different angles. Measurement of allocation to health as a proportion of the total government budget and measurement of the annual increase in allocation in real terms are very high on the priority list. Although this is usually expressed as a way of ensuring that government commitment to the sector is maintained, and as a measure of sustainability, it is actually leverage to increase government expenditure on health.

Donor funding for nongovernmental organizations (NGOs) and the private sector also represents a part of the sector framework and program of work. With the arrangement that such funding will be channeled to the ministry, which will then contract with third-party providers to deliver services on its behalf, capacity building or strengthening of the regulatory framework will be needed. The pooled fund system, and the financial management system as a whole, imply considerable capacity building, particularly at lower levels.
To begin with, Budget and Management Centers (allied to Cost Centers) are required to meet readiness criteria before being designated eligible to receive and manage pooled funds directly. Those that do not meet minimum standards are targeted for extra support, and in the interim will have funds managed on their behalf by the regions. This has implications for accounting, internal controls, and audit.

Planning based on total resources means that internally generated funds from cost-sharing, health foundations, and health lotteries have to be taken on board at an early stage in the planning process through estimates. Providing these estimates will require that revenue generation be monitored appropriately over time, along with other variables such as disease patterns and the use and availability of inputs, including human resources.

Resource allocation

Resource allocation raises a number of concerns apart from issues related to transparency and the need to correct inequity in distribution. The link between allocation and priorities needs to be constantly tracked if the health goals for the medium term are to be met. This is a reflection of the currently insufficient linkages between plans and budgets. Ensuring the shift of resources to lower service delivery levels also raises concerns that tend to attract interest in monitoring resource allocation.

Even in a sectorwide arena, donor interests in specific areas of service still exist. This may be a result of their ideologies and donor-government policies, or their sources of funding. The pressures to allocate to programs and special interventions will always be there. These are usually contained in requirements related to shifts of resources from wage to nonwage recurrent expenditures and allocations to cater to special needs.
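The readiness-criteria gating described above, under which a Budget and Management Center manages pooled funds directly only once it meets minimum standards, and otherwise has funds managed by its region while receiving extra support, amounts to a simple eligibility rule. The sketch below is purely illustrative; the criteria names are hypothetical and do not reproduce the actual Ghanaian checklist, which is not given in this chapter.

```python
# Hypothetical readiness criteria standing in for the actual checklist.
READINESS_CRITERIA = ["trained accountant", "bank account", "audited books"]

def funds_manager(center: str, met_criteria: set, region: str) -> str:
    """Decide who manages pooled funds for a Budget and Management Center.

    A center that meets every readiness criterion manages funds directly;
    otherwise the region manages funds on its behalf and the center is
    flagged as pending extra support.
    """
    if all(c in met_criteria for c in READINESS_CRITERIA):
        return center
    return f"{region} (on behalf of {center}, pending extra support)"

# Hypothetical center that meets only two of the three criteria
print(funds_manager("District Hospital BMC",
                    {"trained accountant", "bank account"},
                    "Eastern Region"))
```

The design point is that the rule is all-or-nothing: partial readiness still routes funds through the region, which is what creates the accounting, internal control, and audit implications noted above.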
Organization and level of decentralization

The decentralized health administration structure creates the need for Budget and Management Centers as the primary units for planning, budgeting, administration, and service delivery. Although these units are all located within defined political and administrative structures (that is, the district), the lowest geographical demarcation, the subdistrict, is not the same as the lowest political demarcation, the local council area. Thus, while the District Health Administration can identify with the District Assemblies, the Subdistrict Health Teams have very little political support at the local level.

This signals the problem of accountability of the health system to the community. The subdistrict feels more accountable to the District Health Administration than to the local community. As a result, the yardstick for performance shifts from what the community perceives to what the district demands. This lack of accountability shows in the way information is collected and managed at the subdistrict level. Data are collected only for transmission to the District Health Administration, and are not used to address local needs. Monitoring the subdistricts is largely reduced to accounting for inputs and how their outputs aggregate to present a district performance. In the same way, feedback systems at this level serve one of two purposes. The first is to show how the subdistrict ranked in contributing to district achievements; the second is to trigger reaction from the District Health Administration.

At the district level, the health sector is increasingly linked with the District Assemblies through the Social Sector Subcommittees. However, the relationship with the subdistrict is replicated with the District Health Administration, which is more attached to the Regional Health Administration.
Each district is made up of an average of eight Budget and Management Centers (six subdistricts, one district hospital, and the District Health Administration), for over 800 Budget and Management Centers at the district level. This number presents logistical problems of data collection and analysis at the central level. The dilemma is how to present information from the district or regional level in a way that can still be used to assess these levels and the various Budget and Management Center categories.

Although a lot of work has gone into ensuring better integration of services, the way reports are presented and analyzed still follows five vertical lines. Somehow these are reinforced by the strong presence of donors, whose policies require that their inputs be used solely for specific interventions. As a result, monitoring is better organized along vertical lines and within programs than in an integrated manner.

The reorganization of the health sector brought into being new directorates and gave old directorates new responsibilities. For instance, the human resource division (which focused on training) assumed responsibility for human resource management and administration. The planning division (which was mainly focused on budget development) assumed an added responsibility for policy development and monitoring. Although these directorates had a change of objectives and added responsibility, they remained strong in their traditional areas. This is reflected in the way their performance is assessed and how reports are presented.

Management and management systems

The reform program has led to the development of several management systems and units within the health sector. The scope for performance monitoring is now very wide, with implications for capacity; development of control systems; and emerging policies, standards, and guides.
These developments have led to the recruitment of a wider range of professionals and to the adoption of new performance measures, some of which are foreign to the health sector and will need interpretation in a health care delivery context. The major areas affected are financial management, equipment management (biomedical engineering), estate and transport management, and procurement of goods and services. Work in all these areas has led to the development of programs that need to be monitored and evaluated periodically. This also raises issues of integrated planning, for instance, ensuring a relationship between the capital program and the recurrent expenditure. By far the biggest concern has been the development of capacity to monitor areas new to the health sector, such as the capacity to monitor consultants engaged in the capital program and to manage consultants on technical assistance programs, especially when that activity is funded by a donor.

Service delivery

Service delivery under the reform program is guided by a number of policies and emphases. Recognizing that resources will be limited, much of the investment is redirected toward priority problems. Within that context, issues of access, quality, and efficiency are being addressed.

Prioritization in itself requires strict monitoring of the epidemiological and socioeconomic importance of health problems. This process is usually greatly influenced by media intervention: in many cases, problems take on importance and attract resources because of this exposure. It is also influenced by the changing social structures in society. When a particular group becomes influential and produces opinion leaders, the performance of the health sector is usually judged by how well their expectations (and not the priorities) are met.
Access to services deals with the availability of facilities and services within a defined range and traveling time (eight kilometers or one hour of travel). It also addresses the issue of financial access by introducing exemption mechanisms for those who cannot pay. One main issue is that these indicators represent a measure of access only for those who use the services or approach the facilities and demand services. A large group of people is left out because of the barriers presented by some policies. For instance, the policy of cost recovery prevents people who do not have money from receiving services. Even if the attempt to improve access is made through outreach clinics, these people will still not take up services unless the health sector changes the way services are brought to people. Access to services does not usually capture health providers outside the public sector. Many of the activities of private, for-profit enterprises (and even the mission hospitals) are not included in the information available to the public sector.

Quality is seen in many ways by different people, but professional standards have traditionally been the measure for quality in the health sector. So far, mechanisms barely exist for articulating patients' perceptions in measuring quality of care.

With the involvement of many non-health professionals in determining health policy and managing the health sector, efficiency measures have tended to be strictly economic. This has made the indicators either meaningless or difficult to measure (or has generated a hostile reception by health workers, who measure efficiency by more social and professional parameters).

Regulation of services

One problem of the health sector is the nonenforcement of much regulatory legislation.
In this regard, the system for regulating professions and services has led to the introduction of new bodies and new standards, and to a review of the roles and responsibilities of existing bodies and standards. On the part of donors, this has led to the development of a code of conduct and several memoranda of understanding as means of maintaining consistency.

Human resource issues

The focus on performance has raised issues of human resource management at all levels. First, indicators of human resource distribution and labor mix for each Budget and Management Center have become more prominent. Because of the requirement for total resource management under the common fund arrangements, the wage bill for the sector has come under closer monitoring.

Summary of key determinants

Concerns regarding the competence and commitment of the health sector helped to shape its M&E system. Although the processes and scope of the reform agenda played a very important role, the critical issues address donor concerns about the reduced opportunity to micromanage assistance packages. This has continued to define the extent of donor participation in the monitoring and evaluation of the health sector. Appraisal missions to assess management capacities in specific areas, such as procurement and financial management, helped to bridge the confidence gap, streamline processes, and set the stage for more joint missions in other areas. Donor focus is now shifting to the use of such missions to help formulate decisions on further assistance.

Bringing together several donors to support one sector program creates a situation in which several areas of interest can be probed. This allows several performance measures to be presented and used in measuring sector performance. It also allows for negotiation in arriving at an agreed, limited set of indicators for assessment.
For Ghana, 20 indicators were agreed on as a manageable set to be used in assessing sector performance. It was obvious that these indicators were selected for reasons other than ease of data collection. Analysis shows that donor interest was instrumental in the selection of indicators; the selection also represents an attempt by donors to keep their main programs in focus. The result was that some of the data required were not available or were not routinely collected in the sector, making it difficult to present a meaningful assessment of the sector.

The sectorwide approach leads to downsizing and a reduced donor presence in the host country and, consequently, less involvement in program implementation. The alternative is to be more involved in sector M&E. On the whole, this has contributed to the shift from individual, program-oriented monitoring to a joint approach involving more than one donor and the Ministry of Health.

The performance-monitoring framework

Monitoring the performance of the health sector under the Five-Year Program of Work essentially follows the three components of the reforms, and can thus be seen from three dimensions:

* Monitoring advances in policy, institutional, and systems development
* Monitoring the performance of Budget and Management Centers
* Assessing the health status of the population.

The process involves joint reviews with strong donor participation, using agreed parameters including the sectorwide indicators. The reviews form a main input into mandatory biannual meetings in which donor participation is very strong. At these meetings, the government is represented by a multisectoral group, including the Ministry of Finance, the private sector, and other stakeholders. The first meeting, held around April, reviews performance of the previous year and maps out areas of emphasis for the coming year. The second meeting, held in September, reviews the half-year performance and the program of work for the next year.
These two meetings have evolved into the main forums for review and agreement on donor/Ministry of Health conduct. The main constraint in the overall exercise is the focus on donor requirements and expectations. The two reviews provide a snapshot of the system and seek to address donor concerns, rather than to respond to public expectations.

Monitoring progress in the reform process

The key elements in tracking progress of the reform program have to do with whether the activities planned actually take place, the overall systemic effects, and how these effects contribute to overall advances in meeting the objectives of the reform program. It is important to see this aspect of monitoring as a critical means of shifting emphasis and nominating annual priorities within the framework of the Five-Year Program of Work. In this regard, three main areas are monitored:

* Policy development and implementation
* Institutional reforms
* Systems development.

In these areas the monitoring systems record concrete achievements through biannual assessments of the sector involving donors. Key areas for action are presented as terms of reference for each level. Several policy and systems indicators and benchmarks are used in this regard, and Budget and Management Centers are required to respond. These responses are collated, and follow-up missions are made.

The main constraint in this process is the inability to measure the net effects of these policies or the influence of institutional development on improving access, efficiency, and quality of care. These have to be measured in another process, so major decisions taken at the biannual forums do not benefit from them.

Monitoring performance of Budget and Management Centers

The main concern in monitoring the performance of the Budget and Management Centers is to account for resources in terms of achievements.
This forms the core of the performance monitoring system and the basis of the service agreement with each Budget and Management Center. In this regard, an essential first step was to clarify the functions and responsibilities of each center and to identify key achievement areas in management, service delivery, and any planned new initiative. The process of negotiation in setting performance targets is essential to ensure that the targets are congruent with overall sectoral objectives and demonstrate progress toward attainment of the objectives of the Budget and Management Center. The information required for these is mainly provided by routine data collection, which depends on the information system operating at each level.

For reporting purposes, a limited set of indicators derived from the sector indicators has been nominated for each center, and each center is required to pledge a level of performance through targets for the year. These constitute the basis for signing a service agreement with the Ministry and for the allocation of funds for the year. Overall monitoring is then based on the achievement of these targets.

The emphasis on performance assessment and targets in designated priority areas suggests the need to monitor the continued pertinence of each Budget and Management Center's responsibilities, its contribution to the objectives of the program of work, and its performance in executing its designated responsibilities. To achieve this, a much more comprehensive and composite measure of performance (which will include measures of technical performance, policy implementation, and efficient use of resources) must be identified. The need to develop a performance culture cannot be overemphasized. It is very important to recognize that for managers at the level where most of these measures are required, the generation and use of relational data is difficult.
These managers, at best, are used to single-dimension indicators of achievement.

One critical issue that needs further study is how to reward performance or sanction the lack of it. In the health sector, the inability to do so means that it will be difficult to develop and sustain a performance culture. For each Budget and Management Center there is a set of core and discretionary expenditures. The core expenditures are those incurred by the mere existence of the Budget and Management Center. The discretionary expenditures are made in relation to decisions to advance the objectives of the center and to provide additional services. Since core expenditure cannot be cut or increased without recourse to legal means, discretionary expenditure is what can most easily be used. In the case of poor performance, however, cutting it would mean that the patient or the recipient of the service actually suffers. The inducement to perform is therefore very limited.

Assessment of health status and health service outputs

Monitoring Budget and Management Centers regularly demonstrates how individual units are performing. There is also a need to examine sectoral performance, not merely as an aggregate of the performance of the Budget and Management Centers. National patterns and trends need to be analyzed to inform resource allocation decisions, adjust targets, and review the effectiveness of policies and strategies.

Assessment of health status is done outside the Ministry of Health by the Ghana Statistical Service. Two main assessments are carried out. The Ghana Living Standards Survey is conducted every five years, and the Core Welfare Indicator Questionnaire Survey is conducted annually to provide updated information for decisionmakers on a set of simple indicators for monitoring poverty and the effects of development policies, programs, and projects on living standards in Ghana.
The process is informed by the health sector through the introduction and review of indicators for the survey. The scope of information generated by these surveys does not address all the areas of interest to the health sector. A formal assessment at the end of the five years is therefore planned. This will be carried out by the Health Research Unit, with support from the Ghana Statistical Service.

Special studies have also been commissioned on community participation, gender issues, poverty alleviation, and patient perceptions of health sector performance. Some of these studies have already led to the development of position papers and a review of policies. However, these reviews are not institutionalized, and the process of nominating research questions has not been clarified.

Issues central to monitoring

In applying the framework described here, several issues have emerged as critical to organizing and implementing an effective monitoring system.

Information management and reporting

The lack of adequately organized repositories of information, as well as of information processing capacity at all levels, stands out as a major issue. While this may be viewed as a legacy of the old civil service system, in which managers collected information for the purpose of transmission, the reasons for poor-quality health information also stem from several other factors. These include health workers' limited appreciation of the usefulness of their information-collection activities.

Although some management data are collected at all levels, including information on budget, personnel, transport, drugs, and other supplies, systematic analysis largely ignores these data and concentrates on health status and health care use data. Thus, information support for planning is usually weak. The management information collected is not always relevant to the day-to-day needs of managers.
The data collection systems in place collect input and output information differently, giving little scope for performance assessment. Without this, there is little incentive to collect and analyze data in a meaningful fashion. Cooperation and information-sharing among programs within the Ministry and with different agencies is minimal because of different priorities and the multiplicity of indicators and data collection procedures used. Although this may be a reflection of the planning process, it is important to view it as an indication of the vertical program management systems still in place.

In addition to these broad constraints, information management in the health sector suffers from other specific setbacks. The first is the poor communication between users and producers of health statistics. The health statistics unit operates not in support of planning activities, but as a unit that produces statistical information as an end in itself. The information produced bears no relationship to the current priorities and focus of the Ministry and is not geared toward assessing performance. Relevant feedback within institutions and among levels is also virtually nonexistent. This has led to a situation where information is largely organized within departments and programs to satisfy specific program requirements. Data collection, reporting, and analysis are thus uncoordinated.

While health statistics are not comprehensive as a management support tool, the shortage of statistical manpower at each level compounds the lack of an integrated health information system. In addition, most current statistical forms are outdated, irrelevant, or duplicative. A review of reporting formats within the Ministry shows that health facilities are required to complete between 36 and 40 different forms from 15 different units and programs for submission to higher levels.
Medical care requires 8 reporting formats to be completed, while disease control requires 14. Environmental health, health education, and supplies (excluding drugs) have none. Information on about 90 percent of these forms is submitted as raw data.

The current information system does not collect data from traditional and private practitioners and institutions outside the Ministry of Health. Data on populations without access to public health facilities, or who use private sector facilities, are not reported in the public sector. These omissions create a problem for policy formulation and the development of strategies, and provide an inadequate basis for planning and resource allocation.

Another basic problem of the information system is the low rate of data and report submission from the periphery. While this may be linked to lack of capacity at lower levels, the basic reason may be the excessive demand for data recording and reporting placed on service staff. Much of these data are not used for tasks performed at their level. For instance, information on communicable diseases collected at the periphery is not organized and analyzed for local action in prevention and control. It is merely collected for onward transmission to the Center for Health Information Management. Commitment and validity are sacrificed in such circumstances.

Indicators for performance monitoring

The absence of a composite measure of performance has already been highlighted. In addition, issues of interpretation, definition, and measurement have been raised. The agreed sector indicators have undergone two reviews to address these issues. There seems to be an unwillingness to undertake a drastic review once indicators have been nominated and agreed. The indicators tend to take on a life of their own and determine the nature of the information system. The merits of the opposite scenario need to be examined.
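The kind of baseline-and-target comparison that the service agreements and performance reviews call for can be sketched in a few lines. The indicator names, baselines, targets, and reported values below are hypothetical illustrations, not Ghana's agreed set of 20 indicators:

```python
# Illustrative sketch: scoring reported indicator values against a
# baseline and an annual target, as a service agreement might require.
# All indicator names and figures are hypothetical.

def progress_score(baseline, target, actual):
    """Fraction of the baseline-to-target distance achieved (may exceed 1,
    and is negative if performance fell below the baseline)."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (actual - baseline) / (target - baseline)

indicators = {
    # name: (baseline, target, reported value) -- hypothetical figures
    "outpatient visits per capita": (0.35, 0.50, 0.42),
    "supervised deliveries (%)":    (40.0, 50.0, 47.0),
    "immunization coverage (%)":    (60.0, 70.0, 58.0),
}

for name, (base, tgt, actual) in indicators.items():
    score = progress_score(base, tgt, actual)
    status = "on track" if score >= 1.0 else ("progressing" if score >= 0.5 else "behind")
    print(f"{name}: {score:.0%} of target gap closed ({status})")
```

Even a minimal scheme like this presupposes the very things the text identifies as missing: an agreed baseline for each indicator and routine data that actually report the corresponding value.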
Difficulties with interpretation, definition, and availability have been highlighted, but a further complication is the inability to establish a baseline for measures of performance. It is difficult to make any meaningful judgement when reports are submitted, and performance agreements, which are major tools for introducing a performance culture, are thus not seen as important. The linkage of indicators to responsibilities as a framework for assessing performance needs to be developed.

In general, monitoring performance by way of selected indicators seems to satisfy donors and the higher levels. It does not change the situation where demands for information from the periphery are the norm. As an interim measure, indicator selection should aim at providing scope for management units to do self-assessment, while satisfying the supervisory levels and fulfilling their obligations under service agreements.

Monitoring and health research

Although health research has developed considerably at both the national level and within some districts, there is very little linkage between research outcomes and policy. This research-policy gap arises because the results of monitoring exercises do not feed into the process of identifying research areas. To strengthen monitoring, therefore, there must be a clear relation between monitoring and health research.

Issues central to evaluation

Although a major evaluation has not taken place since the beginning of the reform program, lessons from annual reviews have indicated that the process would have benefited if the following steps had been taken.

Representation and scope of participation: Donors nominated various consultants, who acted not as representatives of the individual donors but as experts working for a joint group of clients. The leader of the team was nominated by the Ministry of Health.
The strategy worked very well, except that ownership was compromised because managers in the sector saw the exercise as a donor-initiated activity to "check" them. A joint mission would have been received better. Perhaps as a result, the purpose of the annual review has become increasingly blurred. It is seen as serving only the requirements of the management arrangement for the pooled fund, and is viewed as more of an audit than a performance review.

Convergence on the terms of reference: This showed the desire of donors to hold on to their traditional positions. For the donors, the review should be organized around themes and key issues; the Ministry of Health saw the indicators and the objectives of the annual program of work as the point of departure. Although agreement was reached in favor of the Ministry's position, donor considerations had already gone into selecting the experts. It was therefore not possible to eliminate their concerns entirely from the resulting review.

Baseline information: This was an important issue that continues to affect the quality of periodic reviews. Population data are seriously in need of updating, and basic information on service uptake and other social parameters is not available. This makes it difficult to make any meaningful judgement of performance, and therefore of the strength of any M&E system in place.

10. The Agricultural Development M&E System in Morocco: Objectives, Content, and Implementation Status

Abdelaziz Belouafi

As part of the liberalization of the national economy and the new policy direction of the Ministry of Agriculture, information production and M&E functions within the department are vital for better management of the sector.
In response to the need for a decision-support system to enhance the policy direction of its development program and actions, the Ministry has established M&E tools and an M&E organization. In 1994 the Ministry initiated the establishment of an Agricultural Development Monitoring and Evaluation System (SSEDA) as a tool to guide budgetary decisions, agricultural policies, programs, and actions in the sector. At present, the system covers 21 of the 59 regional agencies in the country, accounting for nearly 60 percent of domestic output and more than 75 percent of total exports. Significant progress has already been made. The initiative has helped to lay the foundations of the system and determine methods to make it fully operational in the future. This presentation concentrates on the objectives, contents, and products of the system.

System objectives

SSEDA, a system for preparing and piloting agricultural policies and government action programs, concerns the objectives, policy direction, and projects prepared under the Ministry's annual programs. It covers all the central and regional agencies of the Ministry and is designed to satisfy its need for information and to serve as a decision-support tool.

The system is to provide the necessary information for measuring progress in the implementation of established programs and in achieving set goals. It must provide information about the difficulties encountered and help to assess the causes of the positive or negative results observed and the effectiveness of government (and possibly private) intervention. The system should allow adjustments to be made to better guide actions and optimize resources.

System information content

SSEDA revolves around 30 themes that cover most of the activities undertaken by the Ministry and the major associations and individual agents of the sector.
The themes are classified under the following five categories:

* Development objectives
* Production objectives
* Mobilization and conservation of resources
* Production systems and technologies
* Production environment and support.

System products

The major products of the system, designed to serve as sources of information and decision-support tools, are the databases, the scorecard, and the M&E reports. These three products should enable:

* The constitution of a source of information on the sector (national and regional databases) for rapid and integrated consultation
* The establishment of decision-support tools (scorecard) in the area of technology and in matters of economic policy for decisionmaking at the central and regional levels of the Ministry
* The production of regular and as-needed information on M&E of agricultural development, at both the national and regional levels.

Box 10.1: List of agricultural development M&E themes (SSEDA)

A - DEVELOPMENT OBJECTIVES
1 Food security
2 Exports
3 Overall agricultural balance
4 Agro-industry
5 Farmers' income
6 Employment in rural areas
7 Living conditions of rural dwellers.

B - PRODUCTION OBJECTIVES
8 Plant production
9 Animal production
10 Forestry production.

C - MOBILIZATION AND PRESERVATION OF RESOURCES
11 Hydro-agricultural development
12 Development of farmlands
13 Development of grazing lands
14 Productive forestry development
15 Soil conservation and land protection.

D - AGRICULTURAL PRODUCTION SYSTEMS AND TECHNIQUES
16 Land structures and development methods
17 Rotation, soil use, and irrigation
18 Farm tools and equipment
19 Crop inputs and plant protection
20 Genetic improvement of animal species
21 Animal health
22 Herding and animal feed.
E - ENVIRONMENT AND AGRICULTURAL PRODUCTION SUPPORT
23 Climatic conditions
24 Production prices and outlets
25 Economic and financial incentives
26 Agricultural investment, financing of farm operations
27 Availability and distribution of production factors
28 Extension and professional organization
29 Agricultural research
30 Senior staff training and agricultural sector supervision.

In time, summaries of the regional databases will supply most of the central databases and will be interconnected by a computer network for transmission and dissemination of information. Similarly, the indicators and the M&E reports produced at the regional level will help in analysis and interpretation at the national level.

System actors and users

The actors and users of the system are primarily the central and regional agencies of the Ministry and, currently to a lesser extent, associations and private individuals.

Central and regional agencies of the Ministry

The Directorate of Programming and Economic Affairs (DPAE), which is in charge of the coordination, formulation, and evaluation of agricultural policies, particularly from an economic and social standpoint, also has the specific responsibility of producing and disseminating information to public and private operators in the sector. This function has gained in importance with the new economic liberalization policy and the strengthening of the role of the market in the sector.

The six Central Technical Directorates directly involved in agricultural development determine and supervise the implementation of sector programs and actions, analyze the results, and evaluate their efficiency.
These are:

* The Directorate of Plant Production
* The Livestock Directorate
* The Department of Rural Engineering
* The Directorate of Research and Development Education
* The Directorate for Plant Protection, Technical Control, and Fraud Control
* Water and Forest Management and Soil Conservation (Ministry in Charge of Water and Forestry Resources).

The nine Regional Offices for Agricultural Development (ORMVA) participate actively in the determination of their objectives, programs, and budgets and contribute to the production of basic information on the sector. The 41 Provincial Directorates for Agriculture (DPA) should play an increasing role in the preparation of program proposals as a result of the decentralization policy. The DPAs, like the ORMVA, have an information production role in the sector. The Regional Directorate of Water and Forestry Resources (DREF) has the status of a DPA.

These last three groups of regional bodies constitute the major source of sector data for the central departments and local authorities, and for their own use in the exercise of their responsibilities.

Other actors and users

The monitoring system will also involve, at various levels, state companies and other public agencies and parastatals. Over time, it should adjust to the increasingly preponderant roles of associations and private individuals.

Data sources

SSEDA culls its data primarily from the information generated by the Ministry's departments in their monitoring of implementation activities, such as:

* Technical monitoring: monitoring of the cropping season and markets
* Budget monitoring: commitments, payment of credits
* Management monitoring: personnel, stock of equipment, supplies.

Using this source preempts the construction of a new information center; the focus is thus on facilities for collecting existing data.
Of these monitoring systems, the technical monitoring system appears to be the closest to SSEDA, given the common themes and basic information both handle. The differences between the two systems reside in:

* The level of detail of the information. SSEDA's work is more comprehensive than technical monitoring.
* The time-frame. Technical monitoring is carried out throughout the season; SSEDA is annual or multiannual in scope.

These differences arise because the results of technical monitoring of the operations carried out during the season can be put to immediate use, while SSEDA is designed for global assessment, which generally requires a longer time-frame.

Monitoring and evaluation

Monitoring

Monitoring with the SSEDA system is an evaluation of agricultural sector trends in light of development orientations and objectives. It helps to guide (or readjust) policies and actions undertaken as these relate to the trends observed. The system accomplishes the following:

* It monitors the implementation of development programs holistically, by monitoring their components and taking account of their objectives, the resources assigned to them, and their implementation schedules.
* It monitors the impact of external factors (such as weather, prices, incentives, and monetary policies) that have enhanced or impeded the achievement of the objectives set to advance development.
* It monitors sector trends (such as agricultural GDP, production, and income).

SSEDA should not, however, relegate immediate short-term concerns to the background. The minister, or any central or regional director, can expect to have informed and regular status reports throughout the year without having to wait for the eventual findings and proposals of an annual SSEDA monitoring report.
These status reports will cover:

* The management of the farming season
* The status of temporary difficulties (such as the supply of inputs)
* Progress reports on major programs and budget implementation
* Episodic phenomena (drought, flood).

Evaluation

There are two distinct types of evaluation.

An evaluation in terms of references and objectives: This is a periodic evaluation of sector trends, mostly in terms of production, as compared with the potential of resources and with the objectives set for a given period, or with a predetermined reference period (multiyear averages or a record production year).

An analytical evaluation: This evaluation assesses how development policy, projects, programs, or specific actions, generally geared toward national strategic orientations, affect the sector. These would include:

* The improvement of farmers' incomes and living conditions in rural areas
* Employment promotion in rural areas
* Adding value to agricultural production (marketing, processing)
* The improvement of the agricultural balance (overall, marketing, nutritional).

There are numerous areas of evaluation.

Basic economic reforms: The information provided by SSEDA can contribute to the analysis of a policy. An example is the reforms undertaken under the liberalization of the national economy, where evaluation could cover:

* Price liberalization of production inputs and incomes
* State divestiture from trade and service activities
* The increase in the price of irrigation water in the ORMVA
* The liberalization of external trade.

Evaluation of institutional measures: SSEDA could help in measuring the efficiency of the development policy tools used.
In this respect, the quantitative or qualitative impact on production and agricultural revenue of major institutional, financial, or economic measures taken by the government (taxation, the granting or removal of subsidies, the reduction or removal of customs duties) can be studied.

Evaluation of the domestic economy and its other branches: Under policy analysis, information from SSEDA could be used to assess the impact of the agricultural sector on the other sectors of the national economy in the agricultural production chain.

Specific evaluations: In addition to the overall evaluation of agricultural development, the Ministry could be called upon to conduct specific evaluations to measure the results of its action in a particular area. Such evaluations could, for example, be conducted on:

* The impact of integrated development projects or the development of farmlands
* The impact of sector operations (such as improvement in olive production)
* The impact of actions or methods of agricultural extension
* The impact on environmental protection and natural resources
* The impact of the cost of electrical energy on the development of the sector
* The impact on trends in production and revenues of the irrigation systems, seed varieties, and planting technologies established by the government or by the farmers themselves.

These types of analytical evaluations are the future direction of SSEDA, and would undoubtedly call for the development of capacities within the Division for Monitoring and Evaluation to analyze agricultural policies and their impact on the sector.

Current achievements in implementation of the system

The project was implemented in two phases. The first focused on the technical design of the project (1991-93), and the second on its effective establishment (1994-97).
Technical design of the system: The first concrete aspect of the project started with the creation of a working unit comprising senior staff of the Directorate of Programming and Economic Affairs and national and international experts. This team worked on the identification of the basic elements of project design (monitoring objectives, principles and design, sensitization of partners) and on defining a pilot program for establishing the new system.

Establishment of the system: The system was established in four cycles; the themes and agencies covered were integrated gradually. Currently, 16 of 30 themes have been established in 21 regional agencies.

The achievements to date are:

* M&E activities within the central and regional agencies of the Ministry have been institutionalized.
* A specialized system management team has been created at the central and regional levels of the Ministry.
* Methodological tools (databases, scorecards, M&E reports, user manuals) have been established and adapted.
* The system has been effectively established in all the areas under nine Regional Offices for Agricultural Development, eight Provincial Directorates of Agriculture, and four Regional Directorates of Water and Forests, which account for nearly 60 percent of domestic agricultural output and more than 75 percent of overall agricultural exports.
* A program has been established to extend the system to the central and regional agencies of the Ministry.
* An action plan has been prepared to improve the system's environment and to make it fully operational in the future (continuing education program, general computer scheme, and improved mechanisms for data collection).

Institutionalization of M&E: Institutionalization of M&E within the Ministry preceded the establishment of the system.
It became effective in July 1993 with the creation of the Division for Monitoring and Evaluation within the DPAE; M&E services at the six Central Technical Directorates of the Ministry, where some 80 percent of agricultural development information is found; and M&E offices at the 41 Provincial Directorates of Agriculture (DPA), the nine Regional Offices for Agricultural Development (ORMVA), and the nine Regional Directorates of Water and Forestry Resources (DREF), created in 1996. Moreover, these agencies have been provided with the human and material resources to carry out their assignments (senior staff, technicians, and an operating budget).

Effective establishment of SSEDA: The effective establishment of the system, which commenced in 1994, was based on three main principles:

* Gradual establishment of themes and gradual integration of new agencies into the system
* The involvement of users (Central Directorates, DPA, ORMVA, and DREF) in the process of establishing SSEDA tools
* The establishment of themes that did not need new facilities for data collection.

So far, 16 of the system's 30 themes have been established in 21 regional agencies:

* Plant production
* Weather conditions
* Farm inputs and plant protection
* Availability and distribution of production factors
* Extension technologies and professional organization
* Prices and production outlets
* Animal production
* Herding and animal feed
* Genetic improvement of animal species
* Animal health
* Hydro-agricultural development
* Development of farmlands
* Forestry production
* Development of grazing lands
* Improvement of productive forests
* Soil conservation and land protection.

The 16 themes established at these agencies cover most of their activities oriented toward agricultural development.
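As an illustration only (the chapter does not describe SSEDA's software), a theme-by-agency rollout like the one above can be tracked with a simple coverage table. The figures (16 of 30 themes, the theme names) come from the text; every identifier, the data layout, and the sample agencies below are hypothetical:

```python
# Hypothetical sketch of tracking SSEDA theme rollout across agencies.
# The totals (30 themes) and theme names come from the chapter; the
# agency codes and which themes each has established are invented.

TOTAL_THEMES = 30

established = {
    "ORMVA-1": {"Plant production", "Weather conditions", "Animal health"},
    "DPA-7": {"Plant production", "Prices and production outlets"},
}

def agency_coverage(agency: str) -> float:
    """Share of the 30 SSEDA themes established at one agency."""
    return len(established.get(agency, set())) / TOTAL_THEMES

def themes_established() -> set:
    """Union of themes established at any agency so far."""
    return set().union(*established.values())

print(f"{agency_coverage('ORMVA-1'):.0%}")  # 3 of 30 themes -> 10%
print(sorted(themes_established()))
```

A table of this kind is enough to answer the two questions the rollout raises: how far each agency has progressed, and which of the 30 themes remain to be established anywhere.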
The establishment of the remaining 14 themes is conditional, in part, on the design of a new mechanism to collect the information identified as necessary during the design of SSEDA, and, in part, on the analysis of data from the recently undertaken General Agricultural Census, which gathers information on the structure of the sector (land tenure laws, farm organization).

The future program provides for the extension of the system's themes and geographic coverage (the 23 DPA and the remaining 5 DREF, as well as the 6 Central Directorates concerned).

The methodological tools of the system (SSEDA products) include the items listed below.

Databases: After establishing the list of the system's basic information, corresponding data over a ten-season period were collected at DPA, ORMVA, and DREF. Once the data collection was completed, the databases and the management programs were designed and installed at the agencies covered. These computer programs enable on-line consultation of the databases, as well as automatic printout of the indicators.

Scorecards: The indicators are primarily targeted at directors. Three types of scorecards, specific to DPA, ORMVA, and DREF, were designed and finalized in conjunction with the M&E services and offices of the Central Technical Directorates and regional agencies at a workshop organized for that purpose.

SSEDA's annual reports: SSEDA's annual reports use the scorecards as analysis and interpretation support. The annual reports on the M&E of agricultural development are intended mostly for the technical service chiefs and senior staff members. These reports, initially produced by the Monitoring and Evaluation Division, have served as models for all the central and regional agencies of the Ministry.

SSEDA manual: The SSEDA manual is designed to serve as a system guide for the M&E agencies.
It makes management of the system dependable and ensures service continuity. The manual is evolving, and will be improved as the system develops.

Other activities: Concurrent with the establishment of SSEDA, a number of activities were conducted to improve the future environment of the system. These activities are listed below.

The installation of a pilot monitoring module on the theme of "agro-industry": The monitoring of the agricultural sector previously undertaken by the Ministry focused far more on the upstream aspects of production (planting technologies, output) than on the downstream aspects (marketing, agro-industry). Accordingly, a mechanism for monitoring and evaluating agro-industry was designed and tested in the areas of activity of two pilot ORMVAs. This has resulted in the creation of a database for the main subsectors of agro-industrial production in the areas of activity of the two ORMVAs. The module is ready for extension nationwide.

Establishment of a subsystem for M&E of the technical and economic performance of farms: Several SSEDA themes require the design of new mechanisms for collecting technical and economic information, especially regular farm surveys to monitor investments, crop technologies and output, and farmers' incomes. This information collection mechanism, which should soon fill the information gap for several of the SSEDA themes, has already been designed and installed in all the areas of activity of ORMVA and will soon be extended to areas covered by DPA.

Study on the extension and refinement of SSEDA: This study aims to prepare the extension of the system to all the central and regional agencies of the Ministry and to improve the quality of the data collected and the computer environment for processing and disseminating them.
The activities of the study cover:

* The establishment of a scheme for future extension of the geographic areas and themes
* The fine-tuning of procedures and mechanisms for collecting new data and improving the quality of existing information
* The evaluation of the training needs of the staff in charge of the activities at the central and regional levels and the formulation of an appropriate training program
* The identification of the equipment, computer network, and new software necessary for data management, analysis, and the transmission of SSEDA regional and central databank information.

After the current installation has been completed, some fine-tuning will be necessary to assess the achievements in detail and to determine where extension and improvement of the system are needed to make it fully operational. A seminar for the presentation of results will be organized at the beginning of December 1998, in the presence of the minister and the central and regional directors of the department. The seminar will provide an opportunity to assess the level of agricultural development in the areas covered by the system.

11
M&E Capacity Building in Uganda, with a Focus on Public Expenditure
Michael Wamibu

Monitoring implies an ongoing activity with respect to predetermined outputs, inputs, and a time frame, while evaluation is more of a one-off exercise. Monitoring helps to assess whether the program being implemented is on schedule and whether there are any problems that can be addressed immediately. Evaluation is carried out at the end of the program. It gives an opportunity to assess whether the objectives of the program were achieved. Monitoring and evaluation (M&E) involves three distinct activities: collecting information, analyzing information, and acting on information. M&E is an essential part of public expenditure management.
It helps financial managers and planners assess whether resources have been used efficiently and services have been delivered effectively to the intended recipients.

In Uganda, M&E activities include financial and economic M&E. Financial M&E is concerned with whether the funds released for a program have been used for that purpose; economic monitoring applies to the quality of the services delivered. M&E is carried out by the Ministry of Finance, Planning, and Economic Development; line ministries; the President's Office; the Office of the Prime Minister; and local governments.

Both financial and economic M&E are inadequate in Uganda. Little effort has gone into strengthening M&E because of a lack of adequate manpower and logistical support. Furthermore, it was not clear which ministry would spearhead M&E until recently, when the Budget Policy and Evaluation Department was created and charged with the responsibility of monitoring and evaluating public expenditure.

Problems of M&E of public expenditures

Given that both financial and economic monitoring are weak, there is little documentation about them. However, the following problems have been noted in relation to expenditure M&E.

Lack of accurate data: In central and local governments, financial information on public expenditure is inaccurate. This is partly because of the manual recording systems in place, which make it difficult to monitor the use of funds disbursed by government.

Lack of proper financial systems: The financial systems in place for accountability and for tracking the use of public funds in ministries and districts are still inadequate. Capacities for implementation and enforcement are weak because skills are in short supply and staff are poorly motivated. It is difficult to establish whether the funds released by government to various programs deliver the services required.
Lack of effective financial M&E systems: Because of ineffective financial M&E, some accounting officers in the ministries and districts find themselves overcommitting government, leading to the accumulation of domestic arrears or necessitating demands for supplementary expenditures. Since government runs its budget on a cash basis, expenditure commitments should be in line with monthly releases. This will only be possible if the M&E system (the accounting system) is well maintained.

Lack of facilities: Because of the lack of equipment, facilities, and logistical support, expenditure officers within the Ministry of Finance, Planning, and Economic Development are unable to monitor programs effectively against intended objectives. Very few field visits are undertaken to verify the outputs and services provided in relation to expenditures.

Lack of coordination: Because there is no institutional framework for coordinating expenditure M&E activities within the country, it is not possible to know what activities are taking place in various programs. Government tried to address this issue in 1995 by creating the Monitoring and Evaluation Department within the former Ministry of Planning and Economic Development, but it could not perform its monitoring and evaluation functions because of a lack of manpower, equipment, and financial resources.

What efforts are in place to strengthen M&E of public expenditure?

Government has made efforts to address public expenditure M&E issues, though the initiative encountered some problems. The report "Project Monitoring and Evaluation in Uganda" (Mokoro, November 1993) shows that in the early 1990s, government put in place a Project Monitoring and Evaluation Unit to:

* Develop an information base for projects
* Incorporate lessons of the past into the future design of projects
* Establish a financial monitoring system for projects funded through World Bank credits
* Institutionalize project M&E by developing a unit with national staff to ensure its continuity.

Unfortunately, the unit did not achieve its objectives for the following reasons:

* There was no effective framework for project design and management.
* It was not clear what M&E could achieve.
* There was duplication of roles between the financiers and the officials of the unit.
* It was unclear who had the responsibility of directing the unit.

The main lesson learned for M&E is that in designing monitoring systems, it is necessary to take account of the multiplicity of information levels and requirements.

Public expenditure is divided into two areas: recurrent expenditure and development expenditure. The government is trying to integrate them. As far as the recurrent budget is concerned, after the release of funds by the Ministry of Finance, Planning, and Economic Development, there is no monitoring to ensure that the services and activities rendered are in line with the resources provided. At the end of the financial year, the accounting officers for the ministries and districts submit their returns to the Treasury Office of Accounts without verifying whether the funds reached the targeted programs. The low morale of the officers concerned, a lack of transparency by some accounting officers, and a shortage of skilled manpower appear to be the central causes of this practice.

Regarding the development budget, M&E is done independently by the program managers of the line ministries and the districts where the project is based. M&E is often carried out by donors for the programs they support. However, the program managers submit their returns and work plans quarterly to the Ministry of Finance, Planning, and Economic Development before the release of counterpart funds. The level of M&E of the development budget is greater than that of the recurrent budget.
In the development budget, there is an attempt by the program managers to carry out economic monitoring. This is because 90 percent of the development budget is donor-funded, implying that donors are prominent in the M&E of the programs.

Government established a Parastatal Monitoring Unit to monitor the financial performance of public enterprises. As a result, substantial progress has been made in reducing the domestic debt. The Auditor General's Office is responsible for the M&E of public expenditures within the ministries and public enterprises.

In recognition of the inadequate M&E within the country, a Department of Budget Policy and Evaluation within the Ministry of Finance, Planning, and Economic Development was created this year to address budget policy and evaluation issues. In addition, two sectoral departments have been created to improve expenditure monitoring. This will be achieved in liaison with other departments (the Treasury Office of Accounts, the Inspectorate, and district planning units).

Measures to address the problems

Government is moving to strengthen the role of M&E by defining and establishing standards for the quality of public service delivery. According to the Background to the Budget 1998/99 (Ministry of Finance, Planning, and Economic Development), government acknowledges that it is not clear to what extent past budget out-turns have been in accordance with strategic objectives. It is also not clear to what extent spending has been prioritized according to where it is most effective. There is currently no information to support decisions on spending options; for example, whether spending a million shillings on public health promotion would have more effect on health outcomes than spending the same sum to supply safe water. This is because no performance indicators have been developed with regard to the inputs and outputs of the various programs.
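The comparison the Background to the Budget calls for is, at its simplest, a unit-cost indicator: outcome achieved per shilling spent under each option. The million-shilling budget comes from the text's example; the outcome figures below are invented purely to show the arithmetic of such an indicator:

```python
# Illustrative only: comparing two spending options by outcome per shilling.
# The million-shilling budget is from the text's example; the outcome
# estimates are invented to demonstrate the arithmetic, not real data.

def outcome_per_shilling(outcome_units: float, spending: float) -> float:
    """A simple cost-effectiveness indicator: outcome achieved per shilling."""
    return outcome_units / spending

budget = 1_000_000  # shillings, as in the text's example

# Hypothetical outcome estimates (e.g. cases of illness averted):
health_promotion = outcome_per_shilling(250, budget)
safe_water = outcome_per_shilling(400, budget)

better = "safe water" if safe_water > health_promotion else "health promotion"
print(better)  # with these invented figures: safe water
```

The point is not the figures but the precondition: without measured outcome estimates per program, no such indicator can be computed, which is exactly the gap the chapter identifies.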
The Background to the Budget further notes that it is not clear how well past budgets have scored in delivering value for money. Tracking studies of spending on education, health, and agriculture show substantial diversion of funds: often only one-third of the funds actually benefits the intended recipients.

Initiatives are under way to improve value for money. These include increased emphasis on transparency and accountability; increased involvement of private sector expertise and incentive structures; and the introduction of a results-oriented management culture and output-oriented budgeting procedures.

Government plans to carry out a National Service Delivery Survey (NSDS) to generate data that can be used for planning at both the national and local levels for various service sectors, such as primary education, primary health care, safe water, infrastructure, and modernization of agriculture. The survey will provide information to public service managers at the district and national levels on standards for delivery of public services at all levels of government. The standards will provide a benchmark for M&E of these services. These standards will be reviewed annually to ensure that they support the expected improvement in the quality of services.

This information will help reorient management to focus on outputs and results, and provide an objective basis for the planning and formulation of programs for public services. The survey will provide the quantitative data necessary for the performance indicators, which are essential to the introduction of results-oriented management. NSDS data will highlight areas of substandard performance and will assist managers to target efforts and resources to address specific problems. With the implementation of results-oriented management, a culture focusing on the quality of delivered services will be built into public agencies.
From the findings of the NSDS, the Budget Policy and Evaluation Department will work out a comprehensive framework that will permit expenditure officers and line officers to monitor and evaluate program targets against outputs in the various ministries.

Government has also created a Uganda Bureau of Statistics to strengthen institutional capacity within the country. The Bureau will improve the management of statistics, monitoring, and evaluation within central and local governments. These measures are expected to improve fiscal management within the country. Output-oriented budgeting will help in the evaluation of programs and will ensure a balanced focus between resource allocations and service delivery outputs.

Conclusions

There is a need to strengthen institutional capacity within the country to put in place effective M&E systems. M&E is a crucial element of public expenditure review. The Ministry of Finance, Planning, and Economic Development has made an effort toward carrying out financial monitoring, but little consideration has been given to the quality of the outputs delivered. The system for monitoring outputs and input use should be based on the monitoring of quantitative outputs and on feedback from clients on the perceived quality of service. Economic M&E should be done by the districts, the sector ministries, and the Ministry of Finance, Planning, and Economic Development. Monitoring of programs should be accompanied by systematic and timely dissemination of information. Without financial M&E, it is difficult to have effective economic M&E. Capacity should be built by training staff, providing the necessary logistics, and putting in place proper accounting and reporting procedures.
12
The M&E System and Responsibilities in the Republic of Guinea
Mamadou Bah

The government, in its concern about long-term development, drew up a development strategy known as "Guinea Vision 2000." Among the objectives of the strategy are the correction of past errors and the establishment of a coherent development mechanism, with particular emphasis on the strengthening of institutional, infrastructural, human, and financial resources and the creation of a growth-centered private sector.

For speedy attainment of these objectives, the government, in collaboration with the World Bank, introduced a management and planning approach for development projects: the medium-term expenditure framework (MTEF). The MTEF was initiated in 1997 in four pilot sectors (education, health, rural development covering agriculture, fisheries, and livestock, and road infrastructure) and applied under the 1997 budget. It included the following phases:

* Determination of medium-term sector strategic objectives, together with verifiable indicators
* Translation of the objectives into priority sectoral activity programs, incorporating operational expenses and investment expenditures
* Establishment of the program costs
* Determination of stringent lump-sum amounts for the four sectors to reflect the priorities
* Distribution of the annual budget according to the priority programs.

M&E background

To deal with the many problems encountered in implementing public investment programs through projects, the government saw the need to gradually institutionalize a government-level M&E system. This was strengthened during 1997 with the setting up of a strategic, office-based M&E service in most technical departments. The government sought to keep all stakeholders in public investment policy decisions well informed.
Objectives of the M&E service

In the long term, the M&E service aims to improve the formulation and implementation of new development projects by taking account of experience acquired in the execution of similar projects. To this end, the service envisages the availability of a body of reliable information for evaluating progress toward the predetermined objectives indicated in the project documents. The information drawn from appraisal reports yields data on the following:

* The relevance of the methods used to identify, prepare, and monitor/evaluate the projects
* Analysis of the results (differences compared with objectives)
* The financial and economic performance of projects
* The performance of stakeholders, including government and development partners
* The project's impact on the socioeconomic and natural environments
* The possibility of the project attaining the development objective.

Coordination of M&E activities at all levels, and collaboration with all the agencies involved in producing information at the central level, with donor and government support, make for an established and coherent M&E system operating in an appropriate working environment.

Expected results of the SSE (M&E service)

Improvement of development management skills at all levels is anticipated through the rationalization of each new public investment program and project (with due account taken of available and reliable information drawn from experience). Upgrading of national expertise will have a positive effect on the quality of project implementation results through effective M&E activity (field visits) and the holding of training programs for the stakeholders in the system.

M&E approaches

As part of the new development strategy, government gradually established an M&E system at all levels.

At the central level.
The central Ministerial Department of Planning and Cooperation, through its national directorates, is primarily responsible for the design, formulation, and application of government policy in international development cooperation plans, programs, and statistical studies. The Department is responsible for, among other things:

* Economic, social, and cultural development programs
* Formulation and monitoring of multiannual public investment programs and participation in project preparation
* Preparation and monitoring of international cooperation programs and coordination of technical assistance and the like.

The Ministry of the Economy and Finance, through its national directorates, is responsible for the design, implementation, and monitoring of government policy in the economic, financial, and budgetary fields. The President's Office has a Large-Scale Projects and Administration Unit (Administration et Contrôle des Grands Projets, ACGP) that is responsible for the study and implementation of large-scale projects.

At the sectoral level. In most of the ministerial departments, particularly those responsible for the MTEF (CDMT), offices for planning and development strategy have been institutionalized, with such tasks as conducting M&E of public investment programs and projects. In the exercise of these activities, the agencies concerned are responsible for:

* Conducting M&E of projects under their administrative jurisdiction, to provide the supervisory ministry with a global view of the status of projects and to identify new needs in the sector through impact studies
* Determining, in conjunction with the technical directorates, the key indicators to enable M&E during project preparation
* Initiating and using impact studies to enhance the execution of new projects
* Logging the progress of all projects, using information culled from various M&E reports to identify conducive or constraining factors.
At the decentralized level. The Ministry of the Interior and Decentralization is responsible for piloting the execution of local M&E services, because this activity is part of the regional and prefectoral services. These decentralized services, representing government, participate with other development agents in implementing public investment projects under their administrative jurisdiction.

The Regional Development Directorate is an agency based in the provincial capital. It is composed of the following regional inspectorates: Plans and Statistics, Economy and Finance, Agriculture, Education, Health, and Social Affairs. At the prefectoral level, the corresponding agencies have been established under a secretary-general responsible for decentralization, while the regional inspectorates come under a governor with the rank of a minister. At the head of the regional inspectorates are the regional inspectors, responsible for planning, programming, coordinating, and monitoring investment activities on the ground with their corresponding services at the prefectoral level. The inspectors also initiate the dissemination of information and sponsor training sessions.

Conclusions

Generally, M&E is lacking at all levels. Some of the reasons for this are the absence of relevant procedures and appropriate agencies, inadequate skills, and problems with the circulation of information. The following shortcomings deserve special mention:

* There is a lack of coordination among the agencies responsible for M&E at the intersectoral level, with each sector evolving in isolation and using its own methods.
* Allocations of human, material, and financial resources are insufficient in both quality and quantity.
* Guides, manuals, and regulations for monitoring and evaluation, which all public investment projects should follow, are absent.
* The marked dependence of M&E agencies restricts or impedes their functioning.
* The central M&E mechanism is conspicuously ineffective, with resultant poor quality in programming, public investment programs, and project execution.
* There is a virtual absence of information on projects financed entirely by donors at the central and some departmental levels, with the attendant consequences.

13
The Link Between M&E and Decisionmaking in Zimbabwe
Michael S. Sibanda

M&E is fairly new in Zimbabwe, and most of it has been donor-driven. But since the late 1980s, the government, through its National Economic Planning Commission (NEPC), has been the main force behind the development of national evaluation capacity. The implementation of the pilot project on M&E and its proposed successor demonstrate the strong realization that when used effectively, evaluation capacity can:

* Influence policy analysis and formulation
* Improve resource allocation and the budgetary process
* Improve investment programs and projects
* Examine fundamental missions.

Given the benefits that can accrue to a nation through M&E, applying it to decisionmaking becomes a natural outcome of successful M&E. However, it has been observed over the years that line ministries undertook M&E not to improve efficiency and effectiveness, but to meet donor accountability requirements. In some cases, useful reports have been filed away because the process has been externally driven and there has been no deliberate attempt to adopt it as part of the management process. In the absence of a centrally driven system, line ministries could not link M&E to the national resource allocation process, and they did not feel compelled to submit any M&E information to NEPC.
It was also observed that some stakeholders or actors in M&E were not regularly coordinating with, or submitting key information to, the central M&E agency, NEPC, to feed into macro policy and plan formulation.

M&E stakeholders

* The Comptroller and Auditor General (CAG), who audits expenditures as voted by Parliament. The CAG's office also carries out value-for-money audits, which emphasize efficiency, effectiveness, and the relationship between costs and the actual results of projects. The results of the value-for-money audits are presented to the Parliamentary Committee on Public Accounts.
* The Monitoring and Implementation Division (MID) in the Office of the President and Cabinet.
* The Public Service Commission (PSC), which is responsible for the Public Service Reform Program. The program covers performance management, decentralization, subcontracting, commercialization, and privatization.
* The Ministry of Finance, which needs M&E information to supervise its recurrent budget, loan, and aid coordination responsibilities.

There is a need to coordinate and synchronize all outputs from the various players and to make M&E the strong decisionmaking tool it is meant to be.

M&E and decisionmaking: Experiences

In the absence of a culture supportive of M&E, the possibility of building it into decisionmaking was remote. However, there is now a growing awareness and realization of the significance of M&E in project and program implementation and in policy formulation. NEPC has been involved in a number of evaluations; some of the resulting recommendations were adopted as policy or were used to redirect the following projects or programs.

Coordinated agricultural and rural development (CARD): In 1993 an evaluation of this program was carried out, and it identified weaknesses in the institutions, the sustainability of measures, and operational integration into the planning structures.
The evaluation team recommended that the program be reformulated as Integrated Rural Development (IRDEP). This would entail widening the program scope to include other institutions and components of rural development. The focus of the program would be on the capacity building of existing institutions, rather than the actual implementation of projects. It was further recommended that, in line with its widened responsibility, the program be transferred to the (then) Ministry of Local Government, Rural and Urban Development, which had the mandate to coordinate government institutions involved in rural development.

Urban housing program: The initial income threshold of this World Bank-financed program was Z$400 per month, but after the evaluation this was raised to Z$1,200 because it had been observed that the poor could not afford the stands and thus ended up selling them to the well-off.

Irrigation and water development in communal/resettlement areas: These are smallholder irrigation schemes that bring direct benefit to the community, especially in arid and semi-arid regions where erratic rainfall is not conducive to agricultural development. Among the recommendations emanating from M&E were that procurement be decentralized to the provinces and that farmers contribute toward the cost of water.

Communal area development programs: These are projects aimed at improving the welfare of smallholder farmers through increased production and marketing of horticultural crops and milk (Manicaland Smallholder Coffee, Fruit and Irrigated Food Crops; Mashonaland East Fruit and Vegetable Development Project; Dairy Intensive Resettlement Scheme). The Agricultural and Rural Development Authority (ARDA) was asked to speed the process of handing over management to farmer organizations. It has since done so.
Road sector study and recommendations: Following a number of project evaluations in the road sector, several policy changes are under consideration. These include the proposed creation of a Road Fund to assist in the provision of regular maintenance and the construction of new roads.

Once M&E information is seen as credible, timely, and objective, it is used in decisionmaking. The process has not, however, been effectively institutionalized or enforced. Most often it has been left to the discretion of the implementing ministry, particularly in the absence of a strong focal M&E unit at the national planning level. Most of the successful evaluation studies with recommendations that were adopted as policy have been tripartite, involving NEPC, the line ministry, and donors. The involvement of the implementers has helped evaluation to be seen as a learning process, not a policing exercise by outsiders. An attempted evaluation of the Family Health Program, which lacked constructive team spirit (and was viewed as investigative by the implementers), met with serious resistance and was not taken to its logical conclusion.

Much of the evaluation effort to date has been sporadic and lacking in national drive. In some cases ministries would carry out evaluations without notifying NEPC, solely to meet donor conditions. Where NEPC has been involved, however, a more decision-oriented approach has been taken. NEPC has now adopted a proactive approach and has designed an M&E system in which it directs the whole process, while primary responsibility remains with the ministry. NEPC has resolved to strengthen the link between the Public Sector Investment Program and M&E. The process started in the 1999 budget preparations, in which ministries were advised that allocations would depend on their compliance with M&E requests.
For the first time, ministries were made to realize that M&E was a tool for effective monitoring of the Public Sector Investment Program (PSIP), and that results from M&E would determine resource allocations. The new drive will use M&E information at all stages of the project cycle.

Utilization of M&E information in the project cycle: The overriding principle at all project cycle stages will be the need to draw on best practices based on experience and to ensure that lessons learned in the past are built in to improve on present practices.

Project identification: Programs and projects are identified by the line ministries using the departmental, provincial, and district structures. Lessons drawn from previous studies are applied in determining the desirability and likely degree of success of the proposed project or program.

Project formulation: The key aspects to be considered include background and justification, project objectives, project description, resource requirements, and the implementation plan. Information from previous projects can be used to select the best implementation methods and the most appropriate implementing agency.

Project appraisal: The process will lean heavily on experience, with a view to building on best practices. Submission of a project to NEPC must be preceded by an appraisal done by the ministry. Appraisal involves the examination of all aspects of the project, including its identification and formulation, forecast inputs and activities, expected outputs, institutional framework, and budget. The process entails detailed examination of the technical, economic, financial, environmental, social, and gender dimensions. Results from previous programs and projects must influence decisions if M&E is to take its rightful role in decisionmaking.

Project approval: Projects are approved both at the line ministry level and by NEPC.
Thereafter, they are funded under the ministry's three-year rolling budget, or they are sent to the Ministry of Finance to solicit funding from donors. At the approval stage, NEPC will ensure that SMART objectives are outlined, that indicators for full subsequent M&E are factored in, and that previously requested information has been submitted. The M&E database will also make it possible for NEPC to identify linkages with programs and projects being implemented by other ministries, determine whether the timing is feasible and realistic, and avoid overlap and duplication. While a number of pilot projects were evaluated before replication, this was not policy. It has since been adopted as policy, and no pilot project will proceed to replication without having passed through an evaluation study.

Resource mobilization: This will entail the possibility of funding programs and projects from the government capital budget, from donors, or, in the case of parastatals, from their own resources. Following successful implementation of projects with particular donors, the government will be in a good position to select appropriate financiers or donors.

Project implementation: Approval by Parliament authorizes ministries to make commitments against the approved budget, and ministries are fully responsible for all aspects of implementation of the projects and programs.

M&E: This is the last stage in the project cycle. It provides feedback into policy, plans, and project and program formulation.

Formation of program coordination management units: There are a number of program coordination and management units, such as those for the urban housing programs and for the Rural Water and Sanitation Program. These units promote coordination, and they also ensure adherence to program objectives.

Conclusions

M&E is set to take its rightful role in decisionmaking.
The gaps between decisionmaking and M&E were precipitated by the dearth of M&E capacity within the public sector as a whole. In the absence of such capacity, the benefits could not be articulated, and the cascading of the concepts could not be effectively activated.

M&E is the last aspect of a project and program cycle and should act as the correction model, not only for current projects but also for future, similar projects, as lessons learned are factored into the development and planning process. The link between decisionmaking and M&E cannot be overemphasized. That link is the reason such a function should be established and continued, particularly in times of scarce financial resources.

14
Strengthening Evaluation Capacity in the Latin American and Caribbean Region
Jean S. Quesnel

Strategic priority of in-country activities to strengthen evaluation capacity

During the Eighth General Increase of Resources for the Inter-American Development Bank, the Governors of the Bank identified three priority areas for Bank involvement in the region: (1) poverty reduction and social equity, (2) modernization and integration, and (3) the environment. The increased emphasis given to these areas requires a greater understanding of the effectiveness and efficiency needed to achieve the intended results. Within this context, IDB-8 commissions the Bank's Evaluation Office (EVO) to support in-country capacity building and facilitate cooperation in evaluation activities with other development agencies. EVO's terms of reference require it to conduct evaluations according to the annual work plan approved by the Board of Executive Directors and the president; oversee the performance of the Bank's evaluation system (BES); and provide technical support in evaluation matters in borrowing countries, including strengthening evaluation capacity at the various management levels.
Note: This paper is based on a Working Paper by Francisco Guzman.

Renewed evaluation approach

The Bank's new evaluation mandate highlights the benefits of having borrowers as direct stakeholders in Bank evaluation activities. This reflects the concept in international development assistance that closer coordination in the identification of lessons from the execution of projects is needed to reach or increase the intended effects of Bank lending. It requires the acceptance of evaluation as an integral part of any management system, applicable to any level of corporate or government organization (strategic, policy, program, and project). Evaluation is an ongoing, participatory, and interactive process that encourages institutional learning and can enhance performance. It can become an effective source of current systemic performance feedback, useful for future operations and for optimizing operations.

Conceptual framework for strengthening evaluation capacity

EVO's mandated activities aim at providing evaluation-related technical support to the Bank's Country Offices and responding to borrowers' requests for assistance in strengthening their national evaluation capacities. EVO's activities include dissemination of lessons learned and best practices in evaluation administration, organization, and approaches; training in evaluation and project management; and coordination of donor activities to support evaluation-strengthening activities at the country level.

Activities to strengthen evaluation capacity are based on three principles:

* What gets measured gets done. Performance measurement mechanisms are needed to measure results, as well as to strengthen performance information.
* An evaluation mechanism, when used as a tool for decisionmaking, together with control systems, can be the fulcrum of a system of checks and balances.
* The strengthening of evaluation and control systems is a necessary step in fostering accountability and transparency in public administration.
Evaluation: a tool for reform and modernization of public administration

A key feature of good governance is the ability of a government to learn from experience and apply lessons learned to the development of new policies, programs, and projects. Democratic and participatory governments, by decentralizing economic management, have increased accountability for their actions and provided more transparent information about their decisionmaking processes. Because of this, performance assessment and the use of lessons of the past in the formulation of new policies, programs, and projects are becoming increasingly important.

At the central government level, accountability takes different forms. The traditional form is hierarchical, based on administrative reporting structures that are ultimately responsible to the political level. This form of accountability has been termed macro-level accountability. It can be reinforced by mechanisms of micro-level accountability, involving decentralization, participation, and competition. The latter two factors enable the public to influence the quality and quantity of a service by articulating views or shifting demands. In most countries, citizens have become more critical of government performance in the management of the economy and the delivery of public services. In countries with advanced evaluation systems, such as Australia, Canada, New Zealand, the United Kingdom, and the United States, emphasis is given to managerial accountability based on the production of outcomes rather than on the use of inputs.

The themes of transparency and reliable information pervade good governance and reinforce accountability. Access to reliable information is vital for the development of relevant policies and programs. It can increase the efficiency and effectiveness of public administration and influence the orientation of markets.
Transparency and participation assist governments in implementing market environment policies by clarifying government policies and programs to all stakeholders. Without transparency, there usually is resistance to change, a characteristic of many civil service reform programs of the past. By removing the walls that bureaucracies build around themselves, transparency implies more open management of government. It is essential in any effort to improve performance accountability.

Evaluation can strengthen public sector management by supporting:

* Examining fundamental missions: A vital dimension of public sector management is the evaluation of institutions or of the government itself; that is, the determination of the relevance, performance, and cost of its actions and institutions.
* Influencing policy analysis and formulation: Careful analysis of the costs and benefits of existing policies is key to informed policy formulation. Objective evaluation can also give comfort to decisionmakers faced with tough political choices; it can indicate what works and what does not.
* Improving resource allocation and the budgetary process: Evaluation can help monitor how efficiently government revenues are being spent and provide information to serve as the basis for budgetary decisionmaking. It can also provide greater insight for financial auditing.
* Improving investment programs and projects: Evaluation of investment projects can help promote a performance-oriented culture within government agencies and increase accountability for results.

Building capacity for control systems is essential for stronger checks and balances in government

Government requires institutions, standards, and procedures that ensure integrity, transparency, and accountability in public administration.
In addition to the strengthening of the evaluation function, it is essential that the legislative branches exercise their power of budgetary control and oversight by: (1) modernizing existing control mechanisms, (2) establishing standards for internal control systems that focus on the attainment and quality of results, (3) reviewing existing procurement and regulatory procedures to ensure greater transparency and accountability in the conduct of government business, and (4) developing a greater focus on performance in the operations of public sector organizations or, as it is often described, introducing a performance culture.

Government audits, evaluations, and investigations assess the efficiency, effectiveness, and accountability of government agencies and their programs. These tools for public sector accountability can provide information, unbiased analysis, and recommendations that an organization's customers and stakeholders use to make informed decisions. Governmentwide auditing and evaluation standards on quality control can provide helpful guidance for the federal or central government, state or provincial authorities, and local control organizations in designing or improving their own internal control systems, and can ensure consistent, high-quality products that can be relied on by the organization's customers and stakeholders. Today's total quality management environment offers excellent opportunities to reassess and continually improve the quality control system that helps to provide customers and stakeholders with the service to which they are entitled. Key questions in assessing the effectiveness of an organization's quality control systems should seek to find out whether the government is:

* Doing the right things?
* Doing them right?
* Getting results?
* Achieving consistent quality?
The use of performance measurement to increase results and accountability

Continuing pressures for improved accountability and greater value-for-money performance have prompted governments to recognize the need for improved program performance. Like many OECD countries, Brazil, Colombia, Costa Rica, Chile, Uruguay, and, to some degree, Argentina have legislated or passed administrative mandates for performance measurement of programs, but these efforts are still at an early stage. While many governments are developing formal performance information, not many have implemented it to the point of using it as a regular feature of management and decisionmaking.

Performance measurement is still greeted with skepticism by program managers, in some cases because of unsatisfactory early experiences. Initially, performance initiatives were introduced to help reduce budget deficits. In the absence of performance information, these initiatives were used to determine the relative priority of different areas of expenditure. This often meant the arbitrary cutting of budgets and programs, without taking into account needs and service priorities. This approach was challenged by the value-for-money movement, in which the measurement focus moved from inputs toward outcomes. Increased participation of civil society in governance, and concern for clients and the public, opened the way to the "service for the people" approach. Performance measurement must include both the quantity and the quality of outcomes. The broadening of performance measurement systems matches this general pattern; they support more managerial functions (control, monitoring, motivation, evaluation, and accountability) and cover more elements in the production and policy cycle (inputs, activities, outputs, outcomes, or environment), all at different levels.
Managers are the ones who know their programs best and know what the most appropriate measures would be. Managers responsible for timely and accurate data collection and reporting will be the ones with the greatest potential to use the measures to improve program effectiveness and efficiency. Performance measurement must become an integral part of managing government programs, and it ought to be used in as many decisionmaking applications as possible. The main objective of performance measurement is to support better decisionmaking, leading to improved outcomes for the community. Its ultimate objective is to make public service more efficient and effective.

Strengthening evaluation capacity in Latin America and the Caribbean

Strengthening of evaluation capacity in Latin America and the Caribbean is well under way. While most countries have established investment project databanks to improve monitoring and evaluation of public expenditures, many are increasingly interested in evaluating the results and performance of public sector actions and responsibilities. Several countries have placed the evaluation function in central ministries, or even at the level of the presidency or the prime minister. Others, such as Peru, El Salvador, and Nicaragua, are interested in using the Controller General's Office as the external promoter of evaluation at the agency level, while enhancing the controllership function by including performance evaluation as a component of government audits and reviews.

Colombia and Costa Rica have developed national systems to evaluate government performance and to monitor results in priority areas. Although both countries share conceptual frameworks, the origins of their mandates differ substantially, and they have developed different operational dynamics. In Colombia, the authorities concluded that a public sector evaluation function would become a reality only if a fundamental law required it.
Thus, in the 1991 Constitutional Amendment, the Colombian authorities included an article making public sector evaluation mandatory. This article charged the National Planning Department with the responsibility for establishing a national evaluation system.

In Costa Rica, the president enacted laws requiring the evaluation of government actions through Executive Decree 23323-PLAN, which conferred on the Ministry of Planning (MIDEPLAN) the responsibility for monitoring and evaluating the execution and results of the National Development Plan. In 1995 the president issued a new decree creating the National Evaluation System (SINE), which consists of a network of public sector evaluation units under the coordination of MIDEPLAN. The decree calls for close coordination between the Ministries of Finance and Planning in establishing annual performance goals. In 1996, with technical assistance from EVO, MIDEPLAN developed a government performance measurement methodology, and the president of Costa Rica signed performance contracts with eight sectoral ministries for the achievement of expected results in priority programs. At the end of the same year, a performance review was conducted, and the results were made public by the president to the press. In 1997, 18 additional institutions were brought into the Performance Contract Plan, and indicators are included as part of the 1998-99 budgetary exercise.

Chile has a project evaluation system that has been duplicated in various countries of the region. Public sector performance evaluation is being developed under the coordination of the Cabinet Secretariat of the Executive. Brazil is well on the way to establishing an evaluation system aimed at measuring the results of investment projects.
In most other countries in the region, the culture of evaluation, that is, the systemic use of feedback in the formulation of policies, programs, and projects and in the budgetary allocation process, is almost nonexistent. However, because of the social, economic, and political realities of the region, more governments are beginning to see the potential benefits of evaluation as a way of improving public sector performance and are acknowledging it as intrinsic to the process of public sector reform and modernization of the state.

Some constraints in strengthening evaluation capacity

In Brazil, Colombia, Costa Rica, and Honduras, the strengthening of the evaluation function is a component of modernization-of-the-state projects financed by the Inter-American Development Bank (IADB) and the World Bank. Despite the strong interest in evaluation as a tool for improving public sector performance, participants in the seminars and conferences organized by EVO identified the following common constraints to developing and strengthening evaluation activities in the region:

* Lack of political commitment to evaluation, or of interest in it, at the operational levels of government
* Lack of systematic feedback mechanisms for evaluation findings at the policy, program, and project levels
* Lack of ex ante program and project performance indicators
* Too much emphasis on project completion evaluations, with little attention given to concurrent or mid-term evaluations
* Lack of local expertise in evaluation
* Exclusion of local stakeholders from evaluation activities conducted by external financial assistance agencies
* Politically driven evaluations that lack transparency and objectivity
* Lack of standards for gathering vital data from the execution of public investments
* The "not me" syndrome: dilution of centers of accountability in public institutions.
Experience from within the region shows that to overcome these constraints, countries had to rely on a four-point strategy that aimed to:

* Promote the development of an evaluation culture in government by requiring performance evaluations of government actions, outcomes, and results
* Strengthen expertise and skills in evaluation among public sector administrators
* Encourage the use of external and internal resources to support evaluation activities
* Involve civil society in evaluation of the quality and impact of public programs and projects.

Involving civil society in the evaluation of the quality of results of public sector services has had a slow start. It is being tried in a few countries to improve access to health delivery programs.

The Bank's experience in evaluation capacity

Evaluation capacity strengthening has been an ongoing preoccupation of the Bank in the management of its lending portfolio. At first these efforts were made to ensure that borrowers monitored and evaluated the projects the Bank financed. Before 1993, the Bank's own internal evaluation system was based on evaluation documents produced by the Operations Department, the borrowers' executing agencies, and the Bank's Operations Evaluation Office (OEO). The system was designed to review various aspects of projects at different periods and to include the borrowers' and the Bank's points of view through Project Completion Reports (PCRs), Borrowers' Ex Post Evaluation Reports (BEPs), Project Performance Reviews (PPRs), and Sector Summaries. In 1993 the guidelines for developing PCRs were enhanced to ensure greater input from borrowers, while the Borrowers' Ex Post Evaluation Reports, deemed by borrowers to be cumbersome, costly, and of little value added, became optional in future loans.
Mandate to strengthen evaluation capacity

As mandated by the Bank's Eighth General Increase of Resources, the Evaluation Office has collaborated with borrowers, supporting their initiatives to strengthen evaluation capacity. Within this mandate, EVO assumed responsibility for following up the implementation of the Action Plan of the Regional Seminar on Monitoring and Evaluation in Latin America and the Caribbean, which took place in Quito in November 1993. This seminar was sponsored jointly with the OECD Expert Group on Aid Evaluation.

The objective of the Quito Action Plan is to establish or strengthen evaluation capabilities as an integral part of public sector management for the purpose of improving its efficiency and effectiveness and facilitating transparency and accountability. The plan was based on a fourfold strategy:

* Promote government ownership of the need to establish and strengthen a performance evaluation culture.
* Promote and foster a legal and institutional framework for the evaluation function in line with the conditions of each country.
* Ensure that the evaluation function is carried out in a participatory manner.
* Promote the dissemination of evaluation results nationally and internationally, and foster their use.

However, differing experiences in the use, organization, and administration of the evaluation function throughout the region make it difficult to have a single scenario for supporting the strengthening of evaluation capacity. The differing stages of development of the evaluation function throughout the region suggest the need to build on opportunities that can be justified by the strategy. The challenge for EVO, or for any organization tackling the issue of strengthening evaluation capacity on a regional basis, remains the varying levels of need and interest.
For this reason, a study sponsored by the Bank and organized by EVO classified evaluation capacity in the region into three main categories:

1. Countries where demand for evaluation exists, but experience and capacity are low. Most countries in Central America fall in this category. There is a great deal of receptivity to training and benchmarking, and governments usually conduct donor-driven evaluations by hiring local and international consultants. The masters program in evaluation at the University of Costa Rica has been successful. Where government skills are weak, there is an opportunity to provide technical assistance through direct intervention or by promoting the formation of alliances with private sector professional groups to provide technical support through consultation and participatory approaches to program and project evaluations.

2. Countries where demand is low and that lack the political, legal, and administrative frameworks to support evaluation (most of the Caribbean nations, Venezuela, Paraguay, and Ecuador). To promote demand for evaluation products and show their practical utility, best practices in evaluation from within and outside the country and region have been disseminated. On the supply side, EVO worked through the Caribbean Development Bank to strengthen the evaluation function as a tool for project management. Joint evaluation exercises between multilateral assistance agencies (IADB, CDB, and CABEI) are effective vehicles for developing local skills and increasing ownership of project execution by national project personnel. To ensure the sustainability of training in basic evaluation concepts and methodology, EVO supported the development of a certificate program in evaluation within the School of Public Administration of the University of the West Indies.
In collaboration with the Instituto de Altos Estudios Nacionales of Ecuador, various evaluation exercises are under way within the masters program for mid-level public and private sector managers.

3. Countries where, despite low demand, there is promising capacity and significant expertise. In countries such as Argentina, Mexico, Peru, and Uruguay, where there is sound expertise in evaluation, it has been useful to link evaluation activities to public expenditure reviews and other public sector management activities. While evaluation practices in the public sector are rare, most of these countries have strong financial control systems. To promote the use of evaluation as a systemic tool for decisionmaking, EVO is working with the Controllers of Peru and Mexico to increase the use of evaluation in value-for-money audits. In Peru, it is likely that the Controller General's Office will become the external evaluation component of a government evaluation system coordinated by the Ministry of Finance.

Lessons of experience in supporting the strengthening of evaluation capacity

Over the past five years of collaboration with countries in the Latin American and Caribbean region, the Evaluation Office has distilled some key lessons learned in its approach to strengthening evaluation capacity.

Efforts in evaluation capacity strengthening must serve decisionmakers' needs: Evaluation can best be developed if it is seen by all concerned, national and international stakeholders alike, as a way to learn and improve the performance of the public sector. Plans for strengthening evaluation capacity must be designed to serve the needs and priorities of national decisionmakers. In Costa Rica, demand was created by the president himself, who sees the benefit of using evaluation results to improve public performance.
Again in Costa Rica, coordination of the National Evaluation System (SINE) is the responsibility of the Planning Ministry, acting as the Secretariat of the Government Council. The Minister of Finance is a strong supporter of SINE because it provides a vehicle for better monitoring of public investments and budgetary performance.

Capacity strengthening plans must be created within the context of improving public sector performance: This requires a clear understanding of the organizational and institutional problems that impede performance improvement. Experience shows that at the root of performance impediments in the public sector in the countries where EVO has collaborated is a lack of coherent interpretation of goals and objectives. The performance of any evaluation system will depend on the clarity of centers of responsibility, institutional missions and goals, and a government's goals and objectives.

Start by ensuring the existence of an appropriate step-wise framework: In the countries where EVO assisted in the development of a national evaluation system, the first steps established simple mechanisms that could be implemented in the short term, such as launching the system through executive decrees, rather than starting new and complicated legal and organizational processes that would require long debate and approval periods.

Develop useful performance measurement methodologies: The second phase has been to design simple, useful tools (helpful in light of the growing pressures for improved accountability and greater value-for-money performance at all levels) to demonstrate the relevance of program performance measurement.

The experience of the Evaluation Office in initiating activities to strengthen evaluation capacity in four countries and three subregions has yielded the lessons for future country-specific interventions discussed below.
Factors to consider in instituting evaluation: Costa Rica

January 29, 1996, represented a landmark for Costa Rica's National Evaluation System. This was the day the Minister of Planning presented the president with the conceptual framework and methodology that the system would use to measure the performance of public sector agencies in the administration of government priority areas. The president approved the system and ordered it to be implemented during 1996. By March 26, a total of 26 government agencies had entered into a compromiso de resultados (performance agreement), which stipulated the indicators, weights, and values that would be used to evaluate their performance at the end of the year.

The following issues were considered in the strengthening of the Costa Rican evaluation system. EVO organized the initial benchmarking workshop and presented the Costa Rican government with best practices in instituting the evaluation function in a systematic way. The list of issues and questions presented here is in the order in which they were considered by the coordinating ministry (Ministry of Planning) during the building of the system and the development of its methodology.

Box 14.1: Strengthening evaluation capacity
* Develop a legal and organizational framework. Assess existing laws and organizational structures; changes may be necessary to institutionalize evaluation and ensure its sustainability.
* Identify opportunities at the national and sectoral levels to support stakeholders interested in using evaluation results.
* Incorporate evaluation within the context of public sector management issues.
* Incorporate evaluation and monitoring arrangements into project design and implementation plans, including the use of indicators and mid-term evaluations.
* Use project initiation workshops, mid-term reviews, public expenditure reviews, and other regular activities to encourage and apply lessons of formal and informal evaluations of policies, programs, and projects.
* Promote joint evaluations of Bank-financed projects and encourage other international cooperation agencies to do the same. Promote in-country coordination with the resident aid community.
* Promote beneficiary participatory evaluations. Promote self-evaluations.
* Ensure that appropriate skills, training, methodologies, and standards are provided.
* Promote the inclusion of private sector and academic institutions as a means of validation.

Relation to government agencies
* What is the system's relationship to the national development plan?
* What is the relation of the system to the management of the national budget?
* How does it relate to public sector expenditures?
* How does the proposed evaluation system relate to the mandate of the Controller General or other independent oversight institutions?
* What are the legal requirements for establishing the evaluation system?

Institutional and organizational arrangements
* What is the appropriate division of responsibility among the central government, line ministries, semiautonomous organizations, and provincial and local authorities?
* What should be the rules and regulations governing the evaluation function? Where should the function reside?
* Independence of the function
* Links to the presidency, and to the planning, finance, and budget systems
* How does the evaluation system relate to the legislature?
* Requirements of individual agencies' line management, and the role of self-evaluation
* Relations of the system to research, academic, and private sector professional centers.

Scope, coverage, focus
* Will the system cover all policies, programs, and projects, or will it focus on priority areas?
* Will the system cover all public sector resource uses, or only those financed through public debt?
* Will the system cover all sectors, including decentralized agencies?
* Will the system have priority areas of coverage?
* How will it look at impacts and the human development incidence of public resource use?
* How will it deal with the accountability aspects of public sector decisions?
* What feedback mechanisms will it require?
* What is the optimal internal organization for the agency responsible for the coordination and administration of the system?
* How will it report on evaluation reports, and how will it conduct follow-ups?

Financial resources, staffing, and information requirements
* Are information systems adequate?
* Financial resources needed
* Professional skills and training
* Methods, guidelines, and norms.

Box 14.2: Multiyear program of evaluation capacity strengthening

Part III: Strategies and Resources for Building Evaluation Capacity

15 Development Through Institutions: A Review of Institutional Development Strategies in Norwegian Bilateral Aid

Stein-Erik Kruse

In the early 1990s the Norwegian government introduced two major shifts in its bilateral aid policies. First, institutional development was made a central means of promoting long-term, sustainable growth.
In NORAD's strategy (NORAD 1990, 1992), recipient responsibility became the guiding principle. Southern partners should be fully responsible for designing and implementing their own policies and programs. The donor should assume the role of a partner and nurture effective and stable organizations and institutional frameworks in the public and private sectors.

The second shift was associated with the establishment of the so-called Norway Axis. This called for institutional collaboration between Norwegian public directorates, universities, private companies, and nongovernmental organizations and like-minded Southern partners as the dominant mode of technical cooperation (NORAD 1996).

The new priorities have been increasingly reflected in country strategies and all bilateral aid programs. In 1997 Norway's Ministry of Foreign Affairs commissioned a series of studies on how institutional development is promoted through different channels of its bilateral aid program. Five reports were released in mid-1998. The Center for Partnership in Development (DiS) designed and coordinated all the studies and prepared a Synthesis Report examining both the differences and similarities across channels. Independent teams from Canada, Sweden, and Denmark covered institutional cooperation in the public, private, and civil sectors.

I will now attempt:
* To explain the changes in Norwegian aid policy linked to the rediscovery of institutions in development
* To present a framework for defining and analyzing institutional development
* To summarize key issues and challenges identified in the studies.

Why study institutional development?

Institutional development and capacity building are today pursued as aims by most multilateral and bilateral donors, yet as concepts both remain overused and underdefined.
A recent review of the state of the art in institutional development finds that there is probably no other area of development policy where so much money is spent in pursuit of an objective whose very name, as well as content, is subject to such basic and continuing dispute (Moore 1995, p. 9). There has been more effort to justify the institutional focus than to provide clear aims for the future. The need to strengthen institutional capacity in developing countries seems obvious, but what does it mean, how is it carried out, and with what results? There are still too few evaluations and studies documenting successes and achievements to justify the large investments in institutional development and capacity building efforts from donor countries.2 This was the reason for undertaking a comprehensive study with the following goals:
* Examine and compare institutional development strategies, experience, and results within Norwegian bilateral aid.
* Find out which factors influence and contribute to institutional development, and how public and private organizations in the South change and interact within this context.
* Contribute toward revising strategies for improved institutional development.

The study was designed primarily to deal with the themes of institutional development and institutional cooperation, and not to examine individual countries or evaluate specific programs. It is organized around five elements and a number of key questions, as detailed in box 15.1.

Box 15.1: Study elements and questions

Concepts and intentions
* What do the organizations say they will achieve through institutional development?
* To what extent are the intentions of the cooperating organizations clear, consistent, and shared?

Strategies and actions
* How are intentions operationalized and carried out?
* To what extent are strategies and actions coherent, adequate, and relevant?
Relevance and results
* To what extent are strategies and actions effective (in reaching objectives) and efficient (results compared to costs)?
* What are the results at various levels?

Explanations
* What factors explain variation in results?
* How do such factors promote or impede results?

Comparisons
* What comparable international practices are best?
* How do experiences and results compare across sectors/channels?
* How do results of alternative strategies for institutional development compare?

The transformation of Norwegian aid policies

The broad idea of institutional development gained new importance in the 1990s in both Norwegian and international development cooperation. This was a result of the growing realization of the role institutions and organizations play in the development process. In Norway the idea focuses, in particular, on the ability and capacity of developing countries to design and implement their own policies through the growth and nurturing of effective organizations and institutional frameworks in the public, private, and civil sectors.

Box 15.2: Institutional development
Measures to strengthen important social institutions and organizations will be built around areas of long-term cooperation. Institutional and human resource development will be given greater priority. In this connection the government considers it important to provide the best possible conditions for participation by a broad range of Norwegian expertise and institutions. Cooperation will not be limited to strengthening public institutions but will also include institutions in business and civil society (White Paper No. 19, 1995-96, pp. 42-43).

Norwegian policies provide a broad framework for understanding institutional development. The concept in its current form has not been around long. Through White Papers No. 19 (1995-96) and No.
51 (1991-92) and NORAD's new strategy (1990 and 1992), the Norwegian government has strongly emphasized institutional development. Sustainable development is said to depend on the initiative and responsibility of viable public and private institutions in developing countries. Institutional strengthening, or capacity building for sustainable development, has become a cornerstone and an important rationale for Norway's involvement in international development cooperation.

"The Norway Axis": The virtues of institutional twinning

From 1993 onward, the so-called "Norway Axis" was officially established and systematically pursued, linking institutions in the South with like-minded partners in Norway. While institutional development was perceived as the goal, cooperation among public, cultural, and research institutions; private companies; and nongovernmental organizations became the means. There are other ways to reach the same goal, but it has been assumed that these types of collaborative arrangements have comparative advantages over other forms of technical cooperation (NORAD 1997).

Box 15.3: Development cooperation
NORAD must actively encourage participation on the part of Norwegian organizational and institutional life in development work. By means of active participation on the part of Norwegian organizations and institutions, NORAD will be able to draw upon competence, capacities, and resources which we would otherwise not have access to. Through increased external participation, Norwegian society and public opinion in general will be enabled to identify themselves more strongly with, and show greater appreciation of, Norwegian development cooperation and the challenges and problems which this entails (NORAD 1992).

Rethinking technical cooperation

The new NORAD strategy was also based on a critical rethinking of previous modes of technical cooperation.
In 1993 NORAD decided to reduce the recruitment of long-term individual Norwegian experts because of the meager results in knowledge transfer and the poor integration of aid efforts into national institutions. The principal conclusion of the Nordic evaluation of technical assistance in 1988 was that technical assistance personnel were usually highly effective in operational positions, but much less so in transferring skills and in contributing to institutional development (Forss and others 1988). Traditional technical assistance was strongly criticized by developing countries for being ineffective in building institutions or creating capacity; for being too costly at both macro and micro levels; for being donor-driven; for distorting national labor markets; and for ignoring the dramatic change in the level of skills, knowledge, and confidence among educated citizens in developing countries (Berg 1998). An institutional approach emerged as the alternative.

In the early 1990s there was strong political motivation in Norway to favor more extensive use of Norwegian expertise and institutions in development aid. The time was right for aid authorities to mobilize stronger interest in, and understanding of, international development. More financial resources would be channeled through Norwegian actors, and it was important to fill the gaps in NORAD's capacity to manage increasing budgets. With active support from NORAD's former director general, the agency changed its approach to technical cooperation and started to recruit Norwegian institutions to cooperate with similar organizations in partner countries. In the public sector, recruitment was based on the identification of twinning partners in health, fisheries, higher learning, petroleum, and so on, and the number of collaborative arrangements with NORAD funding increased steadily.
The following figures illustrate significant changes:
* The number of long-term Norwegian experts was reduced from 250 in 1985 to 50 in 1993.
* Thirty-five public institutions are currently involved in over 100 institutional development projects.
* More than 80 nongovernmental organizations support approximately 1,000 large and small projects on three continents.
* Eighty private Norwegian companies interact with an increasing number of companies in the South.

The rediscovery of institutions in development

The move toward institutional strategies among aid agencies was also influenced by new theoretical insights from economics and the social sciences. The fundamental question has long been: Why do some countries prosper while others do not? Access to natural resources, land and minerals, was at one time considered to be the critical prerequisite for development. Gradually thinking changed, and physical capital (infrastructure, machines, and equipment) was held to be the key. Later, other factors, such as human resources and their potential to increase the speed of development, attracted much attention. In the 1980s, through the influence of the World Bank and the IMF, focus shifted to the role of sound policies in explaining countries' differential growth. Since the State was unable to deliver on its promises, and State-dominated development strategies had failed, the logical end seemed to be a minimalist state, doing no harm, but not much good either.

More recently the pendulum has swung back to the quality and effectiveness of a country's organizations and institutions, and in particular to the capability of the State. Awareness of the role of institutions is not new, but its rediscovery in leading economic circles shows that the hegemony of neo-liberalism has passed its peak.
The World Bank devoted a major part of its 1997 World Development Report, The State in a Changing World, to such issues, and firmly states that:

This extreme view (of the minimalist state) is at odds with the evidence of the world's development success stories, which shows that development requires an effective State, one that plays a catalytic, facilitating role, encouraging and complementing the activities of private businesses and individuals. Certainly State-dominated development has failed. But so has Stateless development. Without an effective State, sustainable development, both economic and social, is impossible. The State is central to economic and social development.

The debate on State-market relations has come full circle, with theoretical support for the role of the State from new institutional approaches in economic theory (North 1990).3 A key message is that development is not just about the right economic and technical input in a free market of individual actors, but also about the underlying institutional environment: the rules, regulations, customs, ideologies, and norms that determine the rules of the game, and consequently the effectiveness of the market. Markets and governments are complementary: the State is essential for putting in place the appropriate institutional foundations for markets. The formal rules, along with informal social values, are embodied in institutions that mediate and shape human and organizational behavior. This concept of institutions is very different from the traditional focus on the provision of skills, equipment, and resources to individual organizations. The emphasis now is on the external environment, pressures, and incentives guiding individual and organizational performance.
While much of this material remains in the academic domain, several insights and perspectives can be applied to issues directly relevant to international development. There is a strong link between institutional development and sustainability. Progress toward sustained and self-reliant development depends on the strength and quality of a country's institutional capacity. This is because socio-economic progress requires people to coordinate their behavior, which in turn requires institutions that provide incentives to cooperate, and organizations that bring people together for concerted action.

At the level of programs and projects, it has become increasingly clear that many of the real problems linked to development aid reside not so much in intent and thrust as in execution. And these problems are often organizational and managerial, rooted in difficulties experienced by local institutions in getting things done, and thus represent a major bottleneck for economic growth and development (Berg 1993). What is still weak or missing in many developing countries is the institutional infrastructure that can carry out the difficult task of converting policies into services useful to citizens, and the commitment to change that will make institutions work. An aid activity is not successful unless it takes into account the institutional environment and strengthens institutional and organizational capacities.

The current move toward multidonor-supported, sectorwide approaches has also spurred renewed interest in institutional issues, since such programs depend on the quality and functioning of national institutions, and as such are closely linked to the discussion of reforms and changes in the public sector.
A framework for institutional development

Four perspectives: The discussion of institutional development among aid agencies has sought to address and balance four perspectives:
* Increasing the effectiveness of program implementation
* Strengthening the capacity of organizations and institutions in developing countries to take responsibility for their own development
* Restructuring the public sector as a result of political and economic conditions
* Addressing the formal and informal policies, rules and regulations, cultural norms, and values in society.

The first perspective is the most technical and instrumental. Institutional development is perceived in terms of social engineering related to effective and efficient program delivery. The second perspective is primarily organizational. The aim is to build institutional capacity as a prerequisite for the national execution of programs and for self-reliance.

The third perspective demonstrates that the process is neither neutral nor apolitical. When institutional development is placed in the context of public sector restructuring, it becomes closely linked to national political processes (Engberg-Pedersen 1993). Institutional development is as inherently political as it is technical. The institutional structure of a country or government reflects the existing relations and distribution of power and resources. Problems and low performance are caused not only by lack of resources or knowledge, but also by political conflicts and institutional crises in which the legitimacy and support of public and private institutions are questioned or undermined.

The fourth perspective is the most complex, since institutional development is here rendered inseparable from cultural development.
From a sociological perspective, an institution is a societally valued and sanctioned norm or rule of the game that guides and constrains individual and group behavior. Thus, private property or kinship obligations would qualify as institutions. In this sense, it is not possible to think of developing institutions without attendant cultural changes (Hirschmann 1993).4

The four perspectives illustrate the potential scope and complexity of institutional development. A framework for analyzing institutional development should thus cover a broad range of issues and levels.

A multidimensional model: There is no universally accepted definition of institutional development. There is a rich semantic pluralism, but a review of the relevant literature indicates that the underlying concerns and processes in institutional development share important similarities. We have not invented a new set of definitions, and were partly guided by UNDP, but decided to use institutional development (the term most common in Norway), and not capacity development, as the broader term (UNDP 1994).

Institutional development is here defined as the process by which individuals, organizations, and institutions increase their abilities and performance in relation to their goals, resources, and environment. In this definition, institutional development has three dimensions, which address five different levels.
Human resources development is concerned with how people are educated and trained, how knowledge and skills are transferred to individuals and groups, and how competence is built up and people are prepared for their current or future careers. This represents the first and basic building block of institutional development.6

Table 15.1: Dimensions of institutional development

  Process dimension              Level                    Focus
  Human resources development    Individuals and groups   Competence, motivation
  Organizational development     Organizations            Structures, processes, and systems
  System development             Network linkages         Patterns of communication/collaboration among organizations
                                 Sector                   Policies, rules, legislative framework
                                 Overall context          Macro-level policies and conditions; cultural values, norms, and traditions

Organizational development has another entry point, and seeks to change and strengthen structures, processes, and management systems in organizations to improve organizational performance. There is some variation in approaches to organizational development, but in its pure form it has the following characteristics:
* Focus on individual formal organizations, particularly on their internal functioning
* Less attention paid to external contextual influences on performance
* Major activities and inputs include education, training, technical advice, and equipment
* Organizational change occurs as a result of planned internal changes (in management, culture, administration, and so on) with support from external inputs.

System development is not a common term in development cooperation,7 but seeks to capture what goes beyond organizational development. It is a broader concept and brings in the organizational context. It includes an emphasis on links between organizations and the context within which organizations operate.
While organizational development starts inside an organization, system development extends from the organization to its links and interactions with the external environment. It also relates to how individual and organizational behavior is regulated and affected by external constraints, pressures and incentives, norms, and rules. Contrary to earlier organizational perspectives, system development assumes that organizational innovation also requires changes in external variables. There are different types and levels of system development:
* The network of links between organizations. This includes the network of collaboration between organizations that facilitates or constrains the achievement of particular tasks and underlines the interdependence of organizations.
* The sector environment. This refers to the overall policy and institutional environment of the public, private, and civil sectors that constrains or facilitates organizational activities and affects their performance. This includes policies, laws, regulations, and financial resources.
* The overall context. This encompasses the broad action environment of the organizations, beyond the sector, including the political and socioeconomic milieu (macropolicies and conditions) and the prevailing cultural norms, values, and traditions that facilitate or constrain the functional capacity of organizations.

A distinction is implied between organizations and institutions. Organizations form part of the fabric of institutions. Organizations can be changed and even eliminated without affecting the institution itself. A particular ministry may be abolished, but the government will carry on.8 Structures may change quickly, but not their guiding rules and norms.

It is important to keep in mind the different time perspectives. While human resource development often has a 1-2 year perspective, organizational development needs at least 3-5 years to make a sustainable impact.
System development at the highest, or rather deepest, level means more than structural or functional changes, and requires a long-term perspective. It involves fundamental social and cultural change and is often a more profound, long-term, and complex process than organizational development. Some of the changes would be beyond the reach of donor-funded technical cooperation programs. The development of human resources and organizations may lead to increased effectiveness, while system development may lead to enhanced legitimacy in a society (for example, to acceptance by large groups of the population). Institutional development is the sum of all these dimensions and depends, in principle, on all levels and perspectives.

Box 15.4: Organizations and institutions
A disputed and often blurred line is drawn between organizations and institutions, and it is important to maintain the distinction between organizational and institutional development. The influential literature on the difference mostly refers to distinct though interdependent processes. Institutional economics suggests that institutions represent the rules of the game in a society, the norms and rules which guide and constrain the behavior of individuals and organizations and shape human interaction, while organizations are the actors or players (North 1990 and Bates 1995). The rules define the way a game is played, while the players try to win the game by a combination of skills, strategy, and coordination. They follow the operation of rules, or try to change them. Another, sociological, approach defines institutions as "patterns of behavior that are valued within a culture." In both cases, institutional development refers to activities geared toward guiding and regulating the environment in which organizations operate. It also allows donors to deal with relevant national themes and policies rather than only projects and programs.

Institutional cooperation is defined as formalized, long-term cooperation between two similar or like-minded organizations in the North and South to achieve capacity strengthening in one or both organizations. Ideally, the cooperation should move beyond technical assistance and contribute to the institutional development of the receiving organization. It was emphasized in the studies that there is no intrinsic correlation between institutional collaboration and institutional development, and a key task for the studies was to analyze the degree of correlation.

Current issues and challenges

Several important issues and challenges emerged from the studies that are relevant to Norwegian bilateral cooperation, as well as to participation by other donors.

The need for conceptual and strategic clarification

All studies confirmed that significant changes took place in Norwegian aid policies in the early 1990s. A discernible increase in the number of Norwegian organizations and in the levels of funding for institutional cooperation and development was found.
Strong commitment to, and broad support for, the reforms were seen, even if knowledge and awareness were diluted and decreased proportionally with the distance from NORAD's central offices.

However, the concept of institutional development was still found to be overused and underdefined. Policy changes and principles are clearly explained and justified, but the suggested options are not sufficiently clear. A mixed terminology is used in all channels. Similar terms have multiple meanings, and few operational objectives allow the organizations to target institutional development effectively. The synthesis concludes that the vision and overall objectives point in the right direction, but that NORAD still lacks an operational strategy or a policy of the middle range.

Evidence from the studies shows significant differences between the channels in their receptivity to institutional development approaches, and in the likelihood that the new approaches will feature centrally in specific projects. In brief, nongovernmental organizations (NGOs) are "true believers," and partnership is a general feature of their work today. Public sector institutions seem open to the new approaches and have made progress in consolidating institutional development, thanks to the individuals involved, demands from Southern partners, and other situation-specific reasons. Private sector firms, in contrast, have not understood or have not been willing to adopt the new development terminology, although they may well promote activities that are consistent with institutional development, as long as these activities promote their own objectives.

The suggested solution is not a simple formula for institutional development that can be operationalized with ease, applied in all sectors, and solve the conceptual problem once and for all.
There are no blueprints available for successful institutional development, and experimentation with alternative models is encouraged. Even so, a common point of reference and an operational strategy are required to guide the process. A situation should be avoided where all organizations have their separate definitions of a collective strategy, or where new labels are simply put on old practices to prove they are working in line with new strategies.

Searching for a holistic approach

Institutional development programs are found to be too one-dimensional: in all sectors they tend to be equated with human resources development (education and training), the provision of equipment, and the building of infrastructure. The systemic aspects that go beyond the strengthening of individual organizations (the formation of organizational linkages, reforms in the specific sector environment, and the broader societal context) are not sufficiently addressed.

There is increasing awareness that institutional development goes beyond building competence and organizational strengthening. However, the Norwegian organizations are not well equipped to deal with such issues, and Southern partners consider the broader, systemic perspectives of institutional development to be too political.

The planning of institutional development should be based on a broad systems approach and should consider all levels of intervention. Norway as a donor country should carefully consider what levels and range of intervention Norwegian aid could and should support.

Confusing means and ends

Is institutional cooperation a means or an end? Does institutional cooperation between Norwegian and Southern institutions represent a value in itself, or is it merely a tool to reach other aims, such as institutional development? What kind of connection exists between institutional cooperation and institutional development?
Does the former necessarily lead to the latter, and what are the conditions for a positive correlation? The studies found considerable confusion of means and ends, to the point that institutional cooperation became an end in itself. It was also thought to have inherent qualities that lead to institutional development.

A key message from the studies is that institutional cooperation should be defined as a means, and it should be used if it is found to be the most effective and efficient response to the problem or task at hand. Norwegian institutions should be involved provided that they have the required skills and capacity to work in developing countries. Their involvement should not be based on a principle or on the perceived right of those organizations to channel and administer Norwegian aid.

Behind this lies the fact that the quality of Norwegian organizations varies a great deal. Several of the public and private institutions are considered professional within their fields, but this does not necessarily mean that they are development agencies with the knowledge and technology for problem-solving in developing countries. Neither are they obvious or exclusive partners for NORAD, even if some of them would like to be.

No intrinsic correlation was found between institutional cooperation and institutional development. The former may lead to the latter, but only under certain conditions. The studies identified a number of such success factors.

The need for flexible and adaptive strategies for technical cooperation

The dramatic shift in approach to technical cooperation, from individual advisers to twinning arrangements, builds on a donor belief that a "universal best practice" exists. The studies, however, presented a blurred picture, with few pure approaches.
Individual advisers were part of the twinning arrangements, but they were recruited by the Norwegian organizations and not as NORAD experts. Southern public organizations recognized the virtues and benefits of twinning, but were wary of having too many short-term consultants. Some also expressed the need for a resident coordinator to serve as a link between the Norwegian and the Southern organizations.

The main conclusion was that no single best, most effective approach exists, although some approaches are better than others, given a proper assessment of needs and opportunities in a given country. While donors tend to look for universal "best practices," the case studies call for flexible strategies that are well adapted to varying country contexts.

Exploring new opportunities

New policies of institutional development have led to more than symbolic changes, but program realities tend either to remain unaffected or to encounter problems in incorporating new approaches. Institutional cooperation provides Norwegian organizations with new professional challenges in areas of organizational and institutional development. However, those challenges and opportunities are not yet adequately addressed or explored. The organizations follow traditional patterns of knowledge and technology transfer, and do not seem sufficiently equipped to deal with complex organizational and institutional issues.

Dilemmas of recipient responsibility

Few bilateral agencies have emphasized the principle of recipient responsibility as forcefully as NORAD. According to such an overall principle, Southern partners should not be restricted to seeking services and advice from only Norwegian partners. Institutional cooperation has elements of, and can increase the potential for, "tied" aid, which is not in line with Norwegian aid policies. In the future, alternative ways of providing technical assistance should be encouraged, in addition to twinning.
There are examples of Norwegian organizations, only recently entering the development scene, that have been shielded from the "Norway Axis" by preferential treatment. At the same time, Norwegian organizations should be used to their full potential, given their actual comparative advantages. A study of university collaboration in Tanzania illustrates the other side of the dilemma. In this case, a recipient institution minimized the involvement of a former Norwegian counterpart over time, to the point where it could soon be left out, in the name of recipient responsibility.

The need for a strategic and responsible donor

All studies discuss the role of NORAD. The donor is found to be an important facilitator of institutional development. There are cases where NORAD played an important role in creating necessary changes in ongoing programs or in facilitating new ones. But there are also cases where NORAD was only marginally involved, or not involved at all, in the planning process, and the institutional components were ignored or only weakly developed.

The critical variable is not the level of involvement, but how strategic a role the donor plays. It was strongly recommended that NORAD should not become solely a financier or bureaucratic controller, but rather take an active part in the professional dialogue and the consultations on strategic issues, thus promoting and safeguarding institutional concerns (and sometimes functioning as a "watchdog"). Institutional development is a complex phenomenon that does not happen by default. It requires systematic preparation and clear policy guidance. The principle of recipient responsibility should not exclude the donor from assuming an active, responsible role, contributing strategic input at critical junctures in the program cycle, as long as the final decisions and implementation rest with the recipients.
Recipient responsibility does not imply a passive donor.

Need for systematic analysis and preparation

The systematic appraisal of institutional issues in the planning of new programs was missing in most programs. They also suffered from weak linking of analysis with action. The need for proper institutional diagnosis to precede institutional prescriptions was poorly understood. Institutional development is not yet sufficiently based on the diagnosis and analysis of the current logic and functioning of institutions in each individual context.

Searching for impact and methods

All studies confirm that systems for monitoring and evaluating changes in institutional development are not yet in place. There has been a lack of both effort and method in the evaluation of the processes, effects, and impact of institutional change. Activities at the level of human resources and competence building were, to a large extent, found to be successful in achieving short-term objectives. However, a lack of data and methods prevents proper evaluation at the organizational and institutional levels.

There has been little effort to discuss and evaluate the potential links between institutional development and overall Norwegian development objectives, such as poverty reduction, gender equality, and environmental protection. Institutional development was in some cases seen as an end in itself, and not a contribution to long-term social and economic development. There are links between institutional development efforts and long-term goals, but these are often weak and confounded by other, external factors. Traditional evaluation methods are poorly suited to measuring their strength and direction.

Huge resources are currently invested in various forms of capacity building. Such investments should be carefully examined. This would necessitate new and innovative evaluation methods, leading to more information on effects and impact.
Institutional change in a Southern perspective: Does culture matter?

A key concern in all studies was to understand and assess institutional change from a Southern perspective. The studies sought to examine cultural variables and discuss how organizations and institutions in the South understand and perceive the new institutional strategies promoted by Northern donors. National consultants were involved in all studies, and special efforts were made to document the opinions of Southern stakeholders.

The studies explored, rather than confirmed, difficult and sensitive issues. The findings indicated that many of the common-sense assumptions concerning the nature and process of institutional development may have to be reconsidered. Its origins, workings, and results are far more dependent on specific circumstances than had been assumed.

In general, no major cultural conflicts or differences appeared in the case studies. Institutional development efforts did not contradict the policies or interests of Southern partners. Institutional development initiatives came as frequently from Southern organizations as from their Norwegian partners. No alternative organizational models or options were suggested by Southern partners. Their major concerns were at the level of human and organizational development, and not at the broader level of system development, which they considered to be too political.

However, the studies were unable to delve deeply enough into the relationship between culture and institutional development. What kinds of institutions are suitable and relevant to development in different cultures? The basic question remains: Are we imposing our ideas of what it takes to make organizations and systems work without adapting them to culture-specific contexts?
Most agencies pay lip service to the importance of "culture," while in reality cultural issues are not well integrated.

Endnotes

1. The reports are: "Twinning for Development: Institutional Cooperation between Public Institutions in Norway and the South" (Christian Michelsen Institute, Norway); incorporated in the report is a separate study on higher learning in Tanzania, "Cooperation between the Institute of Development Management (IDM, Tanzania) and Agder College, Norway"; "Institutional Cooperation between Sokoine University and Norwegian Agricultural University" (COWIconsult, Denmark); "Development through Institutions: A Study of Private Companies and Private Consulting Firms in Norwegian Bilateral Assistance" (Andante Consultants AB, Sweden); "NGO Study on Institutional Development" (North-South Institute, Canada); "Institutional Development in Norwegian Bilateral Assistance," Synthesis Report (Centre for Partnership in Development, Norway).

2. Analysis of OECD/DAC data revealed an almost doubling of aid classified as institutional development as a share of total aid from 1987-89 to 1993-95 (DiS, Synthesis Report).

3. At the start of this decade, economic historians, including Nobel laureate Douglass North, who had demonstrated the central importance of institutions in explaining past economic performance, began to turn to today's economic development efforts.

4. Institutional economists provide interesting insights into how moral norms and social values explain the viability and efficiency of the market system (Platteau 1994). The first argument is that private and public order institutions are needed to create order in the market. The state has a critical role that goes far beyond that of establishing or strengthening mechanisms for control of fraud and deceit.
The second argument is that moral norms sustain honest behavior by generating trust and the right kind of preferences, since such norms can act as a substitute for state-engineered rules and control. With reference to developing countries, it is argued that economic development is especially difficult in countries where norms of limited group morality prevail and do not give way to generalized morality.

5. Morgan and Qualman define capacity development as a broader concept than institutional development, but it is difficult to gauge the difference. UNDP and the World Bank (in Asplan 1993) use the terms interchangeably.

6. Institutional development would most likely include and depend on training and educational components, but it is not necessarily true that all training and education have an organizational or system development objective.

7. System development is often called institutional development, but we have found it useful to distinguish the encompassing term and the systemic elements that go beyond organizational development.

8. Uphoff (1986) provides as examples: (a) some institutions are not organizations (a law or a legal system), (b) some institutions are also organizations (a court of law), and (c) some organizations are not institutions (law firms).

References

Asplan Analyse. 1993. Capacity-Building in Development Cooperation: Towards Integration and Recipient Responsibility. MFA Evaluation Report 4.93, Copenhagen.

Bates, R. H. 1995. Social Dilemmas and Rational Individuals: An Assessment of the New Institutionalism. In Harriss and others, The New Institutional Economics and Third World Development.

Berg, E. 1993. Rethinking Technical Cooperation: Reforms for Capacity Building in Africa. Regional Bureau for Africa, United Nations Development Programme and Development Alternatives. New York.

Berg, E. 1998.
Problemnotat om faglig bistand med særlig vekt på institusjonsutvikling [Issues paper on technical assistance, with particular emphasis on institutional development]. MFA, Oslo.

Dahlen, A., Hauglin, O., and Mudenda, G. N. 1993. Capacity-Building in Development Cooperation: Towards Recipient Responsibility and Good Governance. Zambia Country Case Study. Asplan Analyse, Sandvika. MFA Evaluation Report.

Engberg-Pedersen, P. 1993. The State of the Art in Institutional Development: Evaluation of the Role of Norwegian Assistance in Recipient Countries' Administrative and Institutional Development. Asplan Analyse. MFA Working Paper, Copenhagen.

Engberg-Pedersen, P., Hammar, A., and Hauglin, O. 1993. Capacity-Building in Development Cooperation: Toward Recipient Responsibility and Good Governance. Zimbabwe Country Case Study. Asplan Analyse, Sandvika. MFA Evaluation Report, Copenhagen.

Forss, K., Carlsen, J., Frøyland, E., Sitari, T., and Vilby, K. 1988. Evaluation of the Effectiveness of Technical Assistance Personnel. DANIDA, FINNIDA, MCD/NORAD, and SIDA.

Grevstad, L. K., and Hauglin, O. 1995. Technical Cooperation in Transition: Review of Norwegian Policy in Light of DAC Principles on Technical Cooperation. Evaluation Report 1.95, Asplan Analyse for the Royal Ministry of Foreign Affairs, Oslo.

Hirschmann, D. 1993. Institutional Development in the Era of Economic Policy Reform: Concerns, Contradictions and Illustrations from Malawi. Public Administration and Development.

Israel, A. 1987. Institutional Development: Incentives to Performance. Washington, D.C.: World Bank.

Moore, M., with Stewart, S., and Hudock, A. 1995. Institution Building as a Development Assistance Method: A Review of Literature and Ideas. Stockholm: SIDA.

NORAD. 1990. Strategies for Development Cooperation: NORAD in the Nineties. Oslo.

---. 1992. Strategies for Bilateral Development Cooperation: Basic Principles. Oslo.

---. 1997. Annual Report 1996. Oslo:
Norwegian Agency for Development Cooperation.

North, D. 1990. Institutions, Institutional Change and Economic Performance. Cambridge, U.K.: Cambridge University Press.

Platteau, J. P. 1994. Behind the Market Stage Where Real Societies Exist. Part I: The Role of Public and Private Order Institutions. Part II: The Role of Moral Norms. The Journal of Development Studies 30(3).

St.meld 19. 1995/96. Report No. 19 to the Storting (1995-96). A Changing World: Main Elements in the Norwegian Policy Towards Developing Countries.

St.meld 51. 1991/92. White Paper 51 (1991-92). Trends in North-South Relations and Norwegian Cooperation with Developing Countries.

Uphoff, N. 1986. Local Institutional Development: An Analytical Sourcebook with Cases. West Hartford, Conn.: Kumarian.

World Bank. 1997. World Development Report 1997: The State in a Changing World. Washington, D.C.

16. Project Implementation M&E: Ghana's Experience

Hudu Siita

Monitoring is generally concerned with efficiency, which is the rate and the cost at which specific tasks are performed. It focuses on progress in providing the inputs and outputs of projects, and it measures disbursements against established schedules and progress indicators. Monitoring also provides information to measure the achievement of objectives of planned development activity, and for the review or formulation of development policy.
M&E under the decentralized planning system in Ghana

According to Section 1 of the National Development Planning (System) Act of 1994 (Act 480), the decentralized national development planning system comprises the District Planning Authorities at the district level; Regional Coordinating Councils at the regional level; and Sector Agencies, Ministries, and the National Development Planning Commission (NDPC), established under Act 479, at the national level, as the bodies responsible for planning in the new decentralized planning system. The act provides for plan formulation at only two levels in Ghana: plan formulation takes place at the district and national levels, while coordination and harmonization of district plans, and monitoring and evaluation of the implementation of programs and projects in the districts within the region, take place at the regional level.

Ghana has embarked on a new, decentralized planning system. The previous planning system, the preserve of the central government, was highly centralized, with little or no participation by the citizens in whose interest the plans were being implemented. The new system was instituted by the government to facilitate the devolution of power from the central government to the districts and the grassroots, which have become the focal points of development efforts.

Monitoring roles

District Planning Coordinating Units (DPCUs) were established under the Local Government Act of 1993 (Act 462) and were given responsibility for advising the District Planning Authorities on the monitoring and evaluation of programs and projects in the districts. Under the National Development Planning (System) Act of 1994 (Act 480), Regional Planning Coordinating Units (RPCUs) advise the Regional Coordinating Councils (RCCs) on the monitoring and evaluation of district development plans, while ministries, departments, and agencies (MDAs) are required to monitor the implementation of approved development plans.
For this reason, they have established Policy Planning, Monitoring, and Evaluation (PPME) units (The First Medium-Term Development Plan, 1997-2000). The DPCUs and PPME units are required to submit quarterly progress reports directly to the NDPC, with copies to the RPCUs in the case of the DPCUs. The information gathered from such regular monitoring of project implementation will be used to adjust plans or develop programs to promote the attainment of objectives and targets.

To facilitate monitoring, all agencies will adopt a uniform definition of district, thus eliminating the existing system, in which district boundaries differ among development agencies. For example, Ghana Highway Authority (GHA) district boundaries differ from those of the Public Works Department (PWD), and neither coincides with the administrative boundaries. The district boundaries of all agencies should coincide with recognized administrative district boundaries.

The concept of lead and cooperating agencies is applied to the monitoring program. In this regard, agencies are not expected to restrict monitoring to their own narrow implementation activities only, but to extend it to the activities of cooperating agencies that have an impact on the attainment of their own targets.

The NDPC is responsible for monitoring policy and project implementation and for the quarterly distribution of resources for planning purposes. In addition, it is expected to lead a team to carry out physical monitoring of program and project implementation at least once a year. The monitoring team will comprise the NDPC, the Ministry of Finance, a donor representative (if the project is donor-funded), the RPCU of the respective region, and two other relevant organizations.
The MDA approach to M&E

Ministries are responsible for formulating sectoral policies, programs, and objectives, and for M&E, while their subordinate departments and agencies are responsible for the implementation of programs and projects.

Monitoring: Monitoring of project implementation is carried out in the field at the agency level, and progress reports are submitted to head offices and then to the sector ministries. At the ministry level, the PPME units analyze the reports by carrying out the following:

* Measuring inputs and outputs of projects against disbursements of funds and established schedules and progress indicators
* Assessing factors that interfere with outputs and progress
* Identifying necessary actions and deadlines required to make the best use of opportunities for improving implementation of projects or for solving problems
* Determining contractors' workloads, and so on.

Summary reports are then submitted to management, with all the necessary data on which corrective measures or future actions are based. Except for some big projects, where progress reports are sent to the MOF, the reports are for internal management purposes only.

Evaluation: Except for some donor-supported projects whose conditions include evaluation, not much evaluation is done by the MDAs on projects wholly funded by the Government of Ghana. The agencies do not appear to make a distinction between monitoring and evaluation. For example, agencies normally prepare a project completion or end-of-project report that measures impacts, compares actual outputs against expected outputs, measures expenditures against budgets, and draws lessons with respect to cost effectiveness and management of the project for the future (Ministry of Finance 1996).

For long-term impacts, all these MDAs may be relying on the Ghana Living Standards Surveys (GLSS) conducted by the Ghana Statistical Service.
There have been three so far (in 1987/88, 1988/89, and 1991/92) (Ghana Statistical Service 1995), and a complementary Core Welfare Indicators Questionnaire (CWIQ) Survey in 1997. In addition to measuring welfare levels for social and economic groups, the surveys also give such social sector indicators as levels of enrollment in schools, attendance at health facilities, and water supply coverage. Comparative levels of the indicators between the survey periods give a measure of the long-term impact of the government's social sector expenditure and the economic effects on society.

Project implementation monitoring by the Ministry of Finance

In discharging its primary function of economic management, the MOF undertakes economic analysis, public investment appraisals, budgeting, aid coordination, rationalization of debt management, and monitoring of policy and project implementation. The Ministry's monitoring function is similar to the evaluation of budget performance or the analysis and appraisal of investment projects; the interest in expenditure monitoring has been in recurrent expenditure only.

Issues about expenditure control and management arising from the 1993 Public Expenditure Review highlighted the need for a new focus on monitoring. Consequently, the Ministry established a Project Implementation Monitoring Unit that emphasizes physical monitoring; the main purpose is to assist in improving expenditure control and management.
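The comparison of indicator levels between survey rounds described above can be sketched in a few lines. This is an illustrative example only: the survey years match the GLSS rounds, but the indicator values are invented, not actual survey results.

```python
# Hypothetical sketch: comparing a social-sector indicator (e.g., school
# enrollment rate) across survey rounds. Figures are invented for
# illustration, not actual GLSS data.
rounds = {"1987/88": 0.62, "1988/89": 0.64, "1991/92": 0.69}

def change_between(rounds, start, end):
    """Absolute and relative change in an indicator between two rounds."""
    diff = rounds[end] - rounds[start]
    return diff, diff / rounds[start]

abs_change, rel_change = change_between(rounds, "1987/88", "1991/92")
print(f"Change: {abs_change:+.2f} ({rel_change:+.1%})")
```

Trends computed this way are only suggestive of long-term impact; attributing them to government expenditure would require controlling for other economic effects, as the chapter notes.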
Objectives: The objectives of the project-monitoring activity include:

* Determining the attainment of project implementation targets
* Certifying the physical presence of project outputs for which funds have been released
* Certifying the stage of the project for which payment is requested
* Developing the history of the project by relating the schedule of implementation and cost
* Ensuring that project funds are used for the purpose for which they were released
* Building a database from project information for the formulation or review of policy.

Scope and strategy: In order to develop an effective system, monitoring began on a small scale and was eventually extended to all the sectors. During the first year, a start was made with roads, works, and housing (hydro and water supply), followed by health, education, and agriculture (these alone usually account for about 60 percent of the development budget; they also have a significant amount of donor funding support). Monitoring is done through quarterly progress reporting and physical inspection of projects.

Quarterly progress reports (using the formats in the Appendix) are expected from all donor-supported projects that have funds paid into their accounts for the implementation of their projects, and from all projects paid for with certificates and usually supervised by AESC, GHA, PWD, or other consultants.

Monitoring of nonphysical outputs: Progress reports indicate the focus of all nonphysical activities (for example, training) and state the expected outputs/targets, indicators, means of verification, and cost. Outputs/targets are measured against the cost and schedule of implementation.

Monitoring of physical outputs: Usually, requests for payment for completed work are first submitted to the sector ministries that awarded the contracts.
These requests are compiled and submitted to the Ministry of Finance for recording, and funds are then released in bulk to the sector ministries for the payment of contractors. It is at this point, before the release or after payment, that the Monitoring Unit of the MOF conducts inspections to verify the reported stages of completion. At times inspections are done after payment, since delayed payments could increase the cost of the projects. For inspections after payment, projects are selected at random from expenditure authorizations; the large number of projects does not permit the inspection of every one. Field visits are then undertaken to carry out the inspections.

Coordination of activities: The Monitoring Unit coordinates its activities with the Policy Planning, Monitoring, and Evaluation (PPME) units of sector ministries, as well as other relevant divisions/units of the ministry, with respect to information sharing. Sometimes the sector ministries appoint representatives to accompany the MOF monitoring teams to the field. While in the field, the teams are joined by RPCU officials from the respective regions. The RPCU is the RCC's advisory body on monitoring and evaluation.

Reporting: Periodic inspection reports on progress in the field are prepared by the Monitoring Unit and presented to the Chief Director of the MOF. Recommendations from the findings are then made to the minister. These usually concern actions to take in response to the findings.

Findings: Since its establishment a few years ago, the Monitoring Unit has undertaken regular monitoring exercises and several special inspections of projects in various parts of the country. The inspections revealed irregularities that adversely affect expenditure control and management, and consequently the attainment of sectoral project objectives.
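The random selection of projects from expenditure authorizations for post-payment inspection can be sketched as follows. The project identifiers, list size, and sample size here are hypothetical; only the sampling approach reflects the practice described above.

```python
# Illustrative sketch: drawing a random sample of projects for physical
# inspection, since the full list is too large to inspect in its entirety.
import random

def select_for_inspection(authorizations, sample_size, seed=None):
    """Randomly pick projects for post-payment inspection.

    A seed makes the draw reproducible, e.g., for audit records.
    """
    rng = random.Random(seed)
    return rng.sample(authorizations, min(sample_size, len(authorizations)))

# Hypothetical expenditure authorizations (IDs invented for illustration).
authorized = [f"PIP-{n:04d}" for n in range(1, 201)]
to_inspect = select_for_inspection(authorized, sample_size=10, seed=42)
print(to_inspect)
```

Field visits would then be scheduled for the sampled projects only, trading full coverage for a feasible workload.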
There was clear evidence that some funds were not used for their intended purpose, but were diverted to new projects that were not in the approved budget. This was made possible through variations that are unrelated to the original projects, regardless of their implications for the budget. Some unauthorized new projects were hidden under the names of completed projects intentionally retained in the budget.

Some original designs indicating the scope of approved projects were unilaterally changed to bigger projects during execution. For example, the standard design of 9-unit blocks for District Education Offices was changed without authority by other agencies to 36-unit blocks during implementation. Consequently, contract sums increased almost fourfold. Meanwhile, the budget volume and related transactions, from authorization to payment certificates, retained the descriptions that depicted the original designs and scope. In some cases, attempts were made to vary the descriptions of projects in order to hide their illegality. Inconsistency in the description of projects was common, and made identification of projects difficult.

In other cases, misleading indications of project progress were reported, or submissions were made for work not done. In one such case, a request was made for payment for nonexistent materials on site, although the contract was not one for the supply of materials. Finally, a common occurrence has been the use of the term "renovation" as a pretext for starting unauthorized new construction.

Follow-up actions on findings: Action or response by the MOF depends on the nature of the findings. The actions vary, and could include withholding of payment to the contractor until the project reaches the reported stage, queries to heads of departments, and outright refusal to pay. In the case of the unauthorized change in scope of the office blocks, construction was allowed to continue.
The intention was to have the offending agencies share the blocks with other agencies in need of accommodation in the same location. In that case, the defaulting agencies did not achieve their wish to have more space, while the government fulfilled its responsibility to provide office accommodation for some of its agencies.

In fulfilling one of the objectives of the monitoring set-up, the information gathered is used to formulate or review government policy. For example, findings influenced the government's 1998 Budget Statement and Economic Policy to tighten project implementation measures as an expenditure control measure, in order to improve the quality and effectiveness of development expenditure. The measures, with which MDAs must strictly comply, include the following:

* Tender Boards are warned to award only projects in the approved budget.
* All variations are to be approved by the Tender Boards with the prior concurrence of the MOF. Projects with variations exceeding 25 percent of the original contract sum must be re-awarded through competitive bidding.
* Measures are spelled out to ensure the availability of funds and compliance with all other approved project specifications before the commencement of projects.
* Officers and consultants who continue to condone irregularities in project implementation are subject to appropriate sanctions.

Privatization: Promoting private sector development as the engine of economic growth has been a priority policy of the government. This promotion is expected to cover all subsectors of the private sector, including services. Implementation of government projects is usually supervised by consultants; in most cases, they are from government consulting agencies. The revelation of several irregularities, however, suggests some failure by the government's own consultants.
It was decided that in the event of any serious defaults, when project monitoring staff report doubts about the quality, quantities, and actual costs of projects, private quantity surveyors would be invited to reassess costs and quantities. A provision has therefore been made in the budget to engage the services of private consultants to carry out reassessments when the need arises.

Project documentation: There may be well over 2,000 ongoing projects in the Public Investment Program (PIP) alone that are funded under the central government budget. Several other District Assembly Common Fund projects are also currently under way. Given the difficulty of keeping track of all these projects, the government awarded a contract to the Department of Architecture of the University of Science and Technology in Kumasi, Ghana, to document all ongoing PIP projects. The terms of reference included:

* Documenting all projects by region, location, and sector
* Determining the work done on each project and the amounts paid so far
* Determining the work and the amounts required to complete all the projects
* Suggesting a phase-in for completing all the projects in three years, in conformity with the current medium-term expenditure framework period.

Although the contract was awarded to the university, its departments were at liberty to engage (and have engaged) the services of private experts from outside the university to collaborate with the faculty. The information submitted will be used to assist in the preparation of the development budget and will also serve as baseline information to facilitate project monitoring. The involvement of these two groups of private agencies in monitoring reflects current thinking and appears to be the beginning of a new partnership in the monitoring of government projects.
Experiences and lessons: Although regular physical monitoring of project implementation has been under way for only a short period, the actions taken against defaulters, and their responses, offer some lessons:

1. While it is too early to determine the full impact of this method of expenditure control, there is evidence of a positive response. As the monitoring team goes about its work, a decreasing trend in some irregularities has been noted. Word has gone out that the Ministry of Finance does not pay out money without asking questions, and will physically verify the justification for such payments. A real contribution is being made to the control of public finances.

2. Preventing irregularities in project implementation by enhancing policies and improving procedures, while ensuring prompt payments for completed projects, appears to be a more efficient method of ensuring compliance with project specifications.

3. As long as the implementation of new projects is restricted, as is often the case, MDAs will find ways around the restriction and fulfill their desires to start new projects. One response to minimize such irregularities is to require implementing agencies to obtain certificates for the commencement of work, followed by regular physical inspections. The certificates will indicate fund availability, as well as spelling out all project specifications.

4. Better expenditure control is achieved where completed projects are physically inspected before payments are made. This practice has not always been applied because it could result in delayed payments, which would bring about increased project costs. Nevertheless, implementing agencies are still caught even when inspections are carried out regularly after payments have been made, or when the sampling method is applied.

5.
In certain cases, stages of project completion are stated in advance, in the belief that the MOF will delay payments and that, by the time payments are made, construction will have caught up with the predetermined stages. But some are caught when payments are not delayed and when projects are inspected before payment. This category of offenders is more likely to desist from the practice if they are assured of prompt payment.

6. The quality of work submitted by the university consultants, and the time taken to produce that work, suggest that private sector involvement could improve monitoring, and consequently expenditure control and management.

Appendix 1

MINISTRY OF FINANCE
(Reference: MOH/PAD/PIM/9, 17 May 1996)

MONITORING OF POLICY AND PROJECT IMPLEMENTATION IN THE 1996 BUDGET AND FINANCIAL PROGRAM

1. As part of its monitoring function, the Ministry of Finance will, as from this year, broaden its scope of activities to include the monitoring of policy as well as project implementation.

2. The aim is to improve financial control and management, and ultimately the economy of the country.

3. Selected staff of the Ministry have been assigned the responsibility for monitoring various aspects of the economy and the implementation of MDA projects. In this regard, they will monitor all policies and pay random visits to project sites to report on implementation issues, including the following:
a. The attainment of project targets
b. The physical presence of project outputs for which funds have been released
c. The stage of the project for which funds have been requested
d. The utilization of project funds for the purpose for which they were released
e. The historical development of projects, by relating stages of implementation over time, and the consequent cost effectiveness.

4. Lists of all projects in the 1996 Budget are therefore being developed in three categories, by sector and by location, as follows:
a.
Interim payment certificates for all certified projects, usually supervised by AESC, GHA, PWD, or other consultants, which should include progress reports as in Appendix 1.

b. In the case of donor-assisted projects for which counterpart funds are released from the Consolidated Fund into project accounts, progress reports based on the format in Appendix 2 will be expected from each implementing agency one month after the end of each quarter. The first reports for this year should cover the first two quarters, but separated accordingly. It will be appreciated if subsequent reports are prepared and submitted quarterly, as indicated above.

c. For all non-physical projects, progress reports should clearly indicate the activity (e.g., training, workshop, etc.), stating the expected outputs or targets, the indicators, the means of verification, and the cost. Outputs or targets should be measured against cost.

5. It would be appreciated if you could bring the contents of this memo to the attention of all your Departments and Agencies, as well as your Policy Planning, Monitoring and Evaluation (PPME) Units.

6. Progress Reports should be directed to: The Minister of Finance (Attn: Victor Selormey), Ministry of Finance, Accra.

SIGNED. VICTOR SELORMEY
DEPUTY MINISTER OF FINANCE

ALL SECTOR MINISTERS
ALL REGIONAL MINISTERS

cc: The Chief of Staff, Office of the President
All Chief Directors
All Heads of Departments
All Regional Coordinating Directors

Appendix 2

PROGRESS REPORTING ON CERTIFICATED PROJECTS
(To be attached to each interim payment certificate)

1. Project Title
2. Sector Ministry
3. Project Location: Region / District / Town/Village
4. Contractor
5. Contract Award Date
6. Expected Completion Date (as per contract)
7. Initial Total Project Cost
8. Budget Provision for the Year 199
9. Previous Payments (Certified), Cumulative to Date
Last three certificates:
Certificate No. ............ Amount ............ Stage ......%
Date of Certificate ............
Certificate No. ............ Amount ............ Stage ......% Date of Certificate ............
Certificate No. ............ Amount ............ Stage ......% Date of Certificate ............

10. Current Certificate No. ...... Amount ...... Stage ......% Date of Certificate ............
11. Precise Description of Identifiable Stage of Work for which Current Payment is Requested
12. Project Implementation Profile:
Original Implementation Schedule / Actual Implementation Schedule / Reasons for Variance

Appendix 3

QUARTERLY PROGRESS REPORTING ON DONOR-SUPPORTED PROJECTS

1. Project Title, Location, Duration, Components, and Funding Agencies
2. Project budget provision for the year
3. Previous releases for the year
4. Donor direct funding, i.e., financial transfers into project accounts during the year
5. Expenditure statements (GOG Development Funds) for the quarter, by sector (e.g., water, health, etc.) and by location
6. Expenditure statements (Donor Funds) for the quarter, by sector* (e.g., water, health, etc.) and by location
7. Donor fund disbursements by:
- Technical Assistance
- Training
- Procurement
- Direct financial contribution to implementing agencies
8. List of completed outputs and costs for the current quarter
9. Implementation Profile:
Original Implementation Schedule / Actual Implementation Schedule / Reasons for Variance

* Applicable only to multisector projects.

Bibliography

National Development Planning Commission. 1997. Vision 2020: The Medium Term Development Plan, 1997-2000. Accra.
Ghana Statistical Service. 1995. The Pattern of Poverty in Ghana, 1988-1992: A Study Based on the Ghana Living Standards Survey. Accra.
---. 1998. Core Welfare Indicators (CWIQ) Survey 1997: Main Report. Accra.
Ministry of Finance. 1997. Public Expenditure Review 1996: The Delivery of Economic Infrastructure. Accra.
---. 1998. The Budget Statement and Economic Policy of the Government of Ghana for Financial Year 1998. Accra.
17. Conducting Joint Sector/Thematic Evaluations
Niels Dabelstein

Aid evaluation has developed continuously, from an early focus on individual activities and projects to broader evaluations of sectors, programs, and cross-cutting issues or themes such as women in development, the environment, institutional development, and sustainability. Most evaluations are concerned with donor-financed activities. Such evaluations provide valuable information on implementation efficiency and on the delivery of outputs. More recently, impact evaluations and participatory approaches have gained prominence, and methodologies are being refined. But even the best impact evaluations face difficulties when it comes to establishing causal linkages between an individual donor's efforts and developmental changes. An individual donor's evaluations, be they of projects, programs, or sectors, risk ascribing positive development to a limited intervention, when in reality development is the result of the synergetic effect of all interventions by the developing countries themselves, as well as the support provided by donors, enhanced or hampered by external factors.

The emergence of new types of aid intervention and the inclusion of novel themes in development aid pose new challenges to aid evaluators. Some of these challenges are methodological: how to evaluate support for good governance, human rights, civil service reform, and privatization. Some are wider ranging: the current trend to move away from project aid toward more varied and flexible modes of assistance within a sector framework, frequently addressing policy and institutional problems; the revitalization of development cooperation, or partnership; and the substantial and increasing volume of humanitarian relief operations.
The growing emphasis on sector assistance or sector program support requires the partners, the host country and the donors, to program and implement jointly; evaluations should also be carried out jointly. This will burden the recipient administration less than several individual donors' evaluations, have greater impact on shared lesson-learning and policymaking, and be more cost-effective. The disadvantage may be that it is difficult to identify the effects of individual donors' interventions or shares of interventions. I find the advantages in ownership and shared lesson-learning far greater than the disadvantage of reduced individual donor accountability.

Although joint evaluations do take place, they are still infrequent, and they primarily concern donors' jointly financed programs or have been evaluations of multilateral agencies by groups of donors. Most recently, the Joint Evaluation of Emergency Assistance to Rwanda, coordinated by Danida, was the first major collaborative evaluation of a collective effort. Common to most of these evaluations is the minor role, if any, played by the developing countries in their planning and execution. Rarely are the recipient countries involved until an evaluation scheduled by the donor is initiated, and then the recipient government is usually involved in providing information to the donor, but not in the analysis and final assessment of performance.

To ensure that evaluations become efficient learning tools, promote good governance, enable the partners to be fully accountable, and are cost-effective, they must be planned and executed jointly with the recipients. To achieve this, two problems have to be dealt with: the lack of credible evaluation institutions in developing countries and the lack of donor coordination. Evaluation institutions exist in many developing countries, but most have little impact on policy and management decisions, partly because there is little demand for independent and transparent evaluation.
It appears that credible evaluation is a function of good governance, that is, of demand for accountability, more than of evaluation institutions and professional capacity development. Although support for building evaluation capacity in developing countries has been on the agenda for several years, and the DAC Expert Group has promoted the concept through several seminars and through members' direct assistance, progress toward establishing credible and capable evaluation functions in developing countries has been slow.

In line with the emphasis on development partnership, local ownership, and good governance, donors should use the joint programming of sector assistance as a vehicle to assist in developing an "evaluation culture" by building joint evaluations into the sector programs. The value of evaluation as a management tool, as well as an instrument for shared lesson-learning and accountability, could thus be demonstrated. To succeed in this, donors need to coordinate evaluations, and ultimately allow evaluations to be coordinated by the recipient countries.

Coordination among donors is notoriously difficult, and this is true for evaluations as well. Evaluation programs are prepared in response to agency needs for lesson-learning and accountability, and they are geared to the planning and programming cycle of the agency. Through the DAC Expert Group, donors exchange evaluation programs, and occasionally this leads to joint evaluations, but this is the exception rather than the rule. If evaluation is to develop into a joint management and accountability tool, donors need to be flexible, to adjust to the planning cycles of the developing countries, to enable the recipient countries to take the lead in evaluations, and to be accountable both to their own constituencies and to their clients, the people of the developing countries.
18. Summary of Presentations and Discussions

The purpose of these sessions was to identify institutional resources that could be employed in building evaluation supply and demand, in order to address the problems inhibiting the development of M&E capacity in African countries. The sessions focused on three major areas: resources, partnerships and participation, and feedback. Participants formed three groups, and each group discussed one area. The findings and conclusions of each group were presented in a plenary discussion session.

SESSION A: RESOURCES

This report deals with the session on resources, examined by Group A. The group looked into three areas in addressing the issue of resources, with the aim of exploring the possibilities of using them for capacity building. These were:

* Twinning arrangements
* African research institutions and universities
* The private sector.

To set the discussion in motion, the experiences of Norway, the African Capacity Building Foundation, and Ghana were presented by three speakers in the group, in the sequence listed here. The pros and cons of these resources are summarized as follows.

Twinning arrangements

Norway's North/South twinning arrangements were reported to have provided ongoing assistance to developing countries since 1990. They aimed at broad-based, long-term institutional strengthening, as opposed to the provision of individual training or traditional technical assistance to fill a gap. Although this arrangement is still ongoing, an evaluation of its effectiveness was carried out in 1997. The evaluation identified several key issues. The main points were:

* The twinning arrangement was mainly supply-driven.
* It provided a level of professionalism that, at times, was ill-fitted to the developing countries.
* It transferred solutions that might not fit the environment of the country concerned.
In view of this experience, it was underscored that future twinning arrangements should be tailored to suit the specific context of each country, and that the need should be identified and mutually agreed upon with respect to the objectives and the ways of meeting them. The experience of Norway also indicated that none of the past and ongoing twinning arrangements had been used for M&E capacity building. But there was a possibility of using them, provided the objectives (and the ways of achieving them) are well identified in consultation with both parties. The need to train policymakers and enhance the capacity retention of the public sector (including creating an enabling environment) was cited as a way to make twinning arrangements, or any other form of donor technical assistance program, more effective.

It was emphasized that the twinning arrangement should be further explored to better appreciate its usefulness in M&E capacity building. To this end, increased awareness of the institutions in the North that provide such services could help African countries to make use of this resource. It was noted that the twinning arrangement should not be considered an end in itself. It should be used selectively, giving greater importance to the use of local manpower resources (as in the experience of Tanzania) and also exploring the possibility of South/South twinning arrangements.

African research institutions and universities

The experience of the African Capacity Building Foundation (ACBF) was helpful in exploring the possibility of using research institutions and universities for M&E capacity building. The ACBF was established in 1992 to assist Sub-Saharan countries in developing capacity for policy analysis and development management. A review of the public administration of most African countries revealed the lack of readiness of the public sector to introduce policy analysis into the formulation of policies and programs.
In recent years, however, following structural adjustment programs, many countries have created policy analysis units or used research centers and NGOs to engage in policy studies. To strengthen public administration and induce the demand for policy advice, several workshops have been organized by the ACBF over the past few years. However, there are difficulties in getting senior officials from the civil service to engage in research work that may last three or more months.

Nevertheless, it was made clear that research centers and universities could be of use for M&E capacity building, for the following reasons:

* Policy or sectorwide evaluation is not always available in the public sector.
* Such institutions could play a role in advocacy.
* The possibility of capacity retention at such institutions is greater than in the public sector, because these institutions are better funded to provide compatible incentives.

But because these institutions lack practical experience, their recommendations may fail to take account of unavoidable environmental constraints. While the benefit of independent evaluation is considered useful, the experience of Germany, which avoids creating full-fledged M&E capacity units outside the public sector and instead protects the public interest and undertakes joint M&E work, is considered more effective and realistic because it simultaneously builds M&E capacity in the public sector.

Private sector

The experience of the Ministry of Finance in Ghana was presented in detail. In the past, attention centered mainly on monitoring the use of the investment expenditure budget by reviewing expenditures in relation to the stage of project implementation. The monitoring function succeeded in most cases, revealing irregularities, fungibility of funds, and changes of project scope without prior approval.
In recent years, the enormous number of ongoing projects and their wide distribution across the country, as well as the implementation of policy reforms that encourage privatization, have forced the ministry to contract out some of the monitoring function to the private sector. In addition, research centers and universities are being encouraged to provide their services in areas such as developing databases on projects in progress. Despite these recent efforts, however, the evaluation function has not yet been tackled. Yet the monitoring function is centered around impact analysis, taking into account the social sector indicators in the country.

While the importance of involving the private sector in M&E cannot be disputed, it has been realized that existing private sector expertise lies more in the areas of management control and auditing than in evaluation capacity. At present, there is no way of knowing whether such skills exist in African countries. The need for networking and taking stock of existing capabilities in Africa was highlighted.

The group's presentation and discussions were summed up as follows in the plenary session:

* M&E capacity building could be achieved by using the resources available through twinning arrangements, research centers and universities, and the private sector.
* There is no blueprint model for exploiting such resources. The needs and objectives of each country have to be spelled out up front, and there should be a great deal of communication between providers and users to ensure success.
* There is a need to take stock of available M&E expertise in African countries and to develop appropriate networking to enable its use locally and regionally, in addition to North/South assistance.
* There is a need to create an enabling environment, particularly in relation to capacity retention and awareness of the importance of M&E by all concerned.
* M&E capacity building should be demand-driven, and local resources and participation should be fully applied to enhance ownership, commitment, and sustainability.

SESSION B: PARTNERSHIPS

The first presentation was by Mr. Niels Dabelstein, on "Conducting Joint Sector/Thematic Evaluations."

Mr. Dabelstein explained that country/donor participation and partnerships have always existed in the provision of development assistance, but this has not been carried through in evaluation. The evaluation function continues to be largely donor-driven, to meet the requirements for accountability of donor governments to their own parliaments, taxpayers, and civil society. Joint evaluations have so far been largely confined to the donors themselves. He emphasized that because the developing countries and the donors have common objectives for assistance, they should get together to carry out common evaluations. Joint evaluations save money and lessen the burden of evaluation on developing countries (and also have greater validity). Because there is an increasing shift from projects to programs, traditional project evaluations are no longer possible, and joint evaluations should be undertaken at the sector and program levels.

Coordination of donor efforts for evaluation presents considerable difficulty. This can be resolved by the recipient country accepting the role of coordinator. The donor funding of health sector projects in seven African countries, where the same five donors provide most of the financing, was mentioned as an example where donor coordination in evaluation could take place if the recipient country acted as coordinator. The timing and the performance indicators to be used should be established early in the project cycle. Similarly, a common framework and network should be established to facilitate joint evaluations.

The second presentation was by Mr. Christopher Raleigh, on "Country/Donor Partnerships."
He explained that both the countries and the donors have some ambivalence toward evaluation and the selection of indicators. The donors undertake evaluations to provide their parliaments, taxpayers, and civil society with answers to the question: "What difference do donors make to the development process in developing countries?" Partnership implies a common understanding of what needs to be done, and how. It imposes an obligation to continuously discuss the underlying objectives. However, it must be remembered that the objectives of the partners may sometimes differ.

The donor community now views its aid interventions in the light of its overall poverty alleviation goal of reducing the number of the poor by half by 2015. The nature of the development endeavor is also changing rapidly, and evaluators should not remain prisoners of the past, but should adapt to the new requirements. For example, infrastructure-based development assistance has given way to assessments of policy and sectoral reforms, institutional reforms and good governance practices, and impact evaluations. The attribution of benefits and costs in such activities is undoubtedly more difficult.

The following were identified as the tasks ahead:

* Establish more consistent patterns of monitoring and evaluation.
* Develop enhanced capacity to define and handle information.
* In donor coordination, let the initiative start with the recipient countries.
* Increase the use of local consultants at all stages of the project cycle.
* Ensure wider dissemination of evaluation results to civil society.

During the discussions that followed the presentations, the following issues were raised:

* The role of civil servants in joint evaluations was questioned: whether it should be as information providers/facilitators, as part of their civil service job, or as expert evaluators, possibly on a learning curve.
A related issue was that of compensation or incentives for civil servants in either role (in most cases these are currently not comparable to those of donor-hired consultants).

* Sustainability of M&E systems can be assured by the participation and training of local staff. It is also critical that governments and stakeholders continue to believe in the usefulness and necessity of M&E. The latter can be assured if M&E units produce credible reports, if the benefits of evaluation are visible, and if there is wider dissemination of results.

* There should be a consensus approach to drawing up the terms of reference, the reporting requirements, and the selection of consultants, so that differences do not arise between donors and recipient countries; the very recent example of a donor agency in Bangladesh was quoted.

SESSION C: PARTICIPATION AND FEEDBACK

Three presentations were given in Session C on participation and feedback:

* Ann-Marie Fallenius spoke on SIDA's experience assessing stakeholders' participation in evaluations.
* K. Opsal presented the role of NGOs in evaluation.
* R. Gervais addressed the pitfalls of reporting and dissemination.

Two sets of evaluations were analyzed by SIDA in what may be defined as a meta-evaluation: 30 evaluation processes were viewed from the headquarters' perspective, and 9 from the stakeholders' perspective in the field. The questions were not formulated to answer classic evaluation concerns, but covered issues of how and why information was produced and who used it. The study identified many real or potential users, broadly defined as stakeholders, and many types of use, including:

* Instrumental use
* Conceptual use
* Legitimizing use
* Ritual use
* No use.

Stakeholders define their type (or lack) of use according to their interests and needs.
Increasing the level of participation then becomes conditional on involvement at all stages, from the beginning to the end of an evaluation. The study's recommendations were:

* Evaluations should be designed with use in mind.
* Partners should be given access to resources that will enable them to initiate and conduct evaluations independently of the donor.
* Evaluation results must be made more accessible.
* Participatory evaluation methods should be favored.

Participation of NGOs in evaluation processes adds credibility because of their knowledge of local realities and their long-term involvement. The rapid increase of NGOs in African countries has raised concerns as to which NGOs have built solid credentials. Debates surrounding good governance and transparency have strained relations between governments and some NGOs. Changes are taking place in the funding agencies, and NGOs are now being sought as implementors. But certain weaknesses limit their impact in M&E:

* Their institutional capabilities in M&E are limited.
* Their financial and managerial capacities to undertake large assignments are limited.
* They have low levels of self-sustainability.
* There is a risk of conflict of interest.

Producing the greatest possible impact when submitting an evaluation report may prove elusive if reporting and dissemination strategies are not set forth at an early stage in the process. The following suggestions are general, but offer a few of the available options:

* Present and disseminate preliminary results.
* Organize a results presentation seminar with all stakeholders.
* Present the report to the media by press release.
* Produce multiple versions of the report to allow an approach tailored to each audience identified.
* Build a network for the exchange of best practices.
* Use new information technologies.
The target audience should be identified, the better to meet their information needs and to build their ownership of the report's conclusions. Decisionmakers should be given a thought-provoking synthesis; other ministries should receive the complete version; the public and the media may be given easy access to an administrative summary. Although the need for reporting and dissemination may appear self-evident, they are crucial to the success or failure of the evaluation exercise.

Part IV: Options for Evaluation Capacity Development

19 Preparing an Action Plan for Evaluation Capacity Development

Keith Mackay and Ray Rist

Preparation of an action plan for evaluation capacity development (ECD) can be undertaken in two parts. The first involves the preparation of a diagnosis of the existing situation facing a government, either at the overall government level or at the level of an individual sector, if that is the focus of attention. The second part is the preparation of an ECD action plan: a concrete list of actions to be pursued, tailored carefully to the particular circumstances and opportunities in the country. (A suggested outline for preparation of an action plan is presented in the Annex.)

Part I-diagnosis

Eight separate but related steps are set out below for undertaking a country or sector diagnosis (Mackay, 1999):

1. Identify key ministries and other bodies. These organizations are the key stakeholders in the government's approach to performance management. They may or may not currently be involved in performance measurement (broadly defined, and including evaluation). Another component of this step is an analysis of the formal, stated functions of these organizations and their formal interrelationships.

2. Diagnose the public sector working environment of individual ministries and public servants.
This diagnosis encompasses issues such as public sector ethics and incentives, and possible corruption. The public sector environment is important to the extent that it influences the way in which elements of the government operate, as well as their interrelationships and performance. This includes an analysis of the rules systems and incentives that shape the behavior of public servants.

3. Develop an understanding of the factors that influence budget decisionmaking and line management decisions at the level of individual ministries. Identify the actual functions, as distinct from the formal, stated ones, and the degree of autonomy of key central and line ministries. Together, steps 1 and 3 provide a contrast between the stated and the actual processes of decisionmaking within government, both at the whole-of-government (budget) level and at the level of individual, key ministries.

4. Determine the extent of the demand within government for measuring the performance of government activities. This entails asking whether, to what extent, and how evaluation influences budget decisionmaking and line management within individual ministries.

5. Assess the evaluation activities and capabilities of central and line ministries and other organizations (such as universities, research institutes, and the like). This step relates to the extent of the supply of evaluation, including an assessment of the processes and systems that make that information available, that is, the information infrastructure.

6. Investigate the evaluation activities of multilateral and bilateral development assistance agencies as they affect the country in question. These agencies can have a powerful influence on governments through the loans, grants, and advice they provide. This influence may be manifested via the formal evaluation requirements of the development agencies and through the advisory services they provide.

7. Identify major public sector reforms in the government in recent years, as well as reforms in prospect, especially changes that might affect performance management and measurement. These reforms can provide opportunities to pursue ECD.

Together, these seven steps help in the identification of opportunities and limitations in pursuing ECD; this final identification of options is the eighth and final step.

Part II-action plan (step 8)

In developing an action plan, it is important to consider the possible uses of evaluation findings, including:

* They can be included as an input into government or sectoral resource allocation, that is, government's decisionmaking and prioritization, particularly during the budget and planning processes.
* They can serve as an aid to line management, fostering results-based management at the sector, program, and project levels.
* They can support clearer accountability for performance.

Consideration of these possible uses of evaluation findings ensures a strong focus on what an evaluation system is intended to achieve. What does success look like if the ECD efforts are successful?

There is no single "correct" way to prepare an action plan, but it is helpful to consider explicitly the demand for and supply of evaluation findings. On the demand side:

* What are the current sources of demand, and how strong is demand?
* It is crucial to win hearts and minds, that is, to build demand and create consensus. How can this be done? How is one to encourage the government (ministers and officials) to become committed to evaluation?
* Is there any scope to arrange for the creation of ministerial decrees, regulations, and the like relating to evaluation?
* What types of data and evaluations are currently available?
* Which types of evaluation tools would be in demand?
Would these include basic socioeconomic statistics; accounting data on costs; monitoring and performance indicators; surveys of clients and citizens; program and project evaluations (ex ante, efficiency, and ex post); performance/efficiency audits? Consider the actual and potential uses of each (to measure inputs, outputs, and outcomes, for example).

* The availability of more evaluation findings may create winners and losers among different ministries. Who are they?
* Which ministries or agencies, and which parts of them, could be regarded as evaluation "champions" or could be encouraged to become champions (for example, a finance or planning ministry, or a national audit office)?
* Which ministries might be hostile to evaluation, becoming roadblocks?
* How would the support of powerful central and line ministries be acquired?
* How could support be gained from ministers and ministries involved in other public sector reforms?
* How could a program "win hearts and minds" in the civil service?
* What is the possible demand from NGOs and the media?

The supply side is also important:

* In the past, what have been the sources of the different types of evaluation tools (listed above)? What is the quality and availability of socioeconomic data on government inputs and outputs?
* How can the supply of evaluation skills be broadened and deepened?
* What are the possible sources of training for staff: universities, research institutions, the civil service commission, consulting firms, development assistance agencies, twinning arrangements/secondments, and the like?
* How many people should be made available to work on monitoring and evaluation? What would their qualifications and skills need to be?
* What types of evaluation tools would they need to master? Bear in mind the tradeoff between simplicity and depth of understanding (the latter involves cost).
* How much funding would be necessary and available?
* At what levels (national, sector, major projects) do you wish to develop evaluation capacity?
* There is a possible role for a trial approach, to provide a good-practice demonstration model and to serve as a basis for replication in other sectors or areas.

What types of evaluation infrastructure do you need? These would include planning mechanisms to decide which evaluations to conduct, and mechanisms to ensure that evaluations are conducted in a timely manner and their results provided to those who commissioned them.

What support might you need from development assistance agencies, and at what stages in the ECD process?

* Such support might include advice, expertise, organization of seminars, training, loans and grants, identification of qualified consultants, support for local/regional universities and research institutions, preparation of guidance material (including case studies), and greater harmonization of donor evaluation requirements.

Consider issues of timelines, sequencing, and speed of implementation:

* It may be difficult to make plans into the distant future, and it may be prudent to retain some flexibility in approach and to take advantage of future opportunities as and when they arise.
* But it is very worthwhile to set indicative targets for given time periods, such as 1, 3, 5, and 10 years in the future.

How sustainable would the evaluation capacity be? What might be some risks and threats to sustainability?

When mapping out an action plan, it may be useful to focus on a small number (5, for example) of the main ECD initiatives that are considered feasible, and to flesh them out with as much detail as possible. In identifying ECD options, it is useful to consider the relative and absolute strength of demand and supply.
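The demand/supply judgement lends itself to a simple two-dimensional lookup. The following is a minimal illustrative sketch, not part of the original text: the option lists are condensed from the four quadrant lists given in this chapter, and the `ECD_OPTIONS` table and `suggest_options` function are hypothetical names introduced only for the example.

```python
# Illustrative sketch: mapping a judged (demand, supply) pair to candidate
# ECD options, condensed from the chapter's four "corner" environments.
# Names and structure are hypothetical, not from the source document.

ECD_OPTIONS = {
    ("strong", "strong"): [
        "Organize and systematize the evaluation function",
        "Establish links among evaluation, planning, and budgets",
    ],
    ("strong", "weak"): [
        "Support financial and monitoring information systems",
        "Train and use private sector organizations in evaluation",
    ],
    ("weak", "strong"): [
        "Mandate evaluation by decree and regulation",
        "Link evaluation to public sector and budget reforms",
    ],
    ("weak", "weak"): [
        "Strengthen audit and accounting",
        "Raise awareness among decisionmakers",
    ],
}

def suggest_options(demand, supply):
    """Return candidate ECD initiatives for a judged (demand, supply) pair."""
    key = (demand.lower(), supply.lower())
    if key not in ECD_OPTIONS:
        raise ValueError("demand and supply must each be 'strong' or 'weak'")
    return ECD_OPTIONS[key]

# Example: a country judged to have weak demand but strong supply.
for option in suggest_options("weak", "strong"):
    print("-", option)
```

As the chapter itself cautions, this two-by-two judgement is an oversimplification: each dimension has many facets, so such a mapping can only be a starting point for discussion rather than a prescription.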
A simple approach is to reach a judgement about whether demand for evaluation findings is strong or weak, and whether the supply of evaluation findings is strong or weak. (This approach is certainly something of an oversimplification; there are many dimensions to demand and supply.) A large number of ECD options can be designed for each of these four "corner" environments. The list following offers options that might be worth particular consideration (a longer list is shown in Mackay, forthcoming).

Strong demand, strong supply

* Organize and systematize the evaluation function.
* Support evaluation of policies, programs, and projects.
* Establish links among evaluation, strategic planning, resource allocation, and budgets.
* Strengthen evaluation in the legislature (parliament).

Strong demand, weak supply

* Support financial and information (monitoring) systems.
* Disseminate lessons of experience and evaluation best practice.
* Train and use private sector organizations in evaluation.
* Introduce evaluation in universities through short courses or seminars.

Weak demand, strong supply

* Mandate evaluation by decree and regulation.
* Link evaluation to ongoing public sector management (governance) reforms.
* Link evaluation to public expenditure management (budget system) reform.
* Organize seminars for senior officials to help build consensus.

Weak demand, weak supply

* Strengthen audit and accounting.
* Disseminate national and international lessons of experience.
* Support evaluation training in educational institutions.
* Raise awareness among decisionmakers.

Annex: Suggested Outline of an ECD Action Plan

Part I-diagnosis

* Complete the reviews of the existing situation, at the national or sectoral level, on the basis of Steps 1-7.
Part II-action plan

* Uses of evaluation findings
* Demand
* Supply
* Levels
* Evaluation infrastructure
* Support from development assistance agencies
* Timelines, sequencing, speed of implementation
* Sustainability
* Main ECD initiatives.

Reference

Mackay, K. 1999. Evaluation Capacity Development: A Diagnostic Guide and Action Framework. OED Working Paper No. 6. Washington, D.C.: World Bank.

20 Options for Evaluation Capacity Development

On the third day of the seminar-workshop, the country groups translated the discussions of the previous days into proposed action plans for their specific countries. This exercise aimed at grounding evaluation capacity development in national realities by:

* Having country delegates establish a diagnosis of conditions in their country
* Elaborating a strategy to enhance evaluation capacity development and improve monitoring and evaluation activities.

The action plans have not been agreed to by the governments of the 11 countries; they reflect the considered judgments of each country's officials who attended the seminar, concerning what appears feasible for their countries. Eleven country groups submitted action plans:

* Côte d'Ivoire
* Ethiopia
* Ghana
* Guinea
* Malawi
* Morocco
* Mozambique
* South Africa
* Tanzania
* Uganda
* Zimbabwe.

The country groups analyzed situations of weak (even very weak) demand and supply for M&E activities. Recurrent themes of their diagnosis can be identified:

* Several groups said that the legislative or administrative frameworks were not adapted to the needs of M&E activities (Guinea, Morocco, Uganda). These frameworks did not provide for the expansion of M&E to new sectors, nor did they offer any encouragement or adequate protection to civil servants in the field (Malawi).
* Although some countries have built a core evaluation sector, weak coordination between ministries (Ghana, Guinea, Mozambique) hampered its development and the building of a credible accountability system.
* Most groups noted that the lack of training had been the major constraint to the emergence of a reliable supply of local expertise (Zimbabwe, Ethiopia, Malawi).
* Without proper budget allocations, M&E becomes an empty shell, another case of wishful thinking (Guinea, Tanzania).

Selected examples from the proposed action plans are noted in the following section.

Institutional reforms

* The existence of political will does not always result in a solid body of rules and regulations concerning M&E activities (Guinea). "Mandating evaluation by decrees and regulation" (Malawi) would help ECD.
* The absence of an administrative framework creates ambiguity (Morocco) when joint evaluations cross ministry borders, so that the "establishment of rules and procedures organizing the evaluation function" (Morocco) appeared to many as a means of "institutionalizing evaluation at national and district levels" (Ghana).
* Numerous groups submitted that evaluation had to be clearly linked with good governance issues (South Africa, Tanzania) and public sector reform (Tanzania, Malawi).
To ensure transparency, it was suggested that "such evaluation results will be publicized through parliament and the media" (Malawi).

Box 20.1: Summary of initiatives presented in the proposed action plans
(major initiatives, with the number of action plans that include each)

Institutional reforms
* Adoption and application of appropriate legislation, as well as production of manuals, guides, and other tools: 6
* Building of consensus, awareness, and acceptance of M&E activities in key decision centers: 5
* Transparency rules applied to evaluations, enhancing accountability

Human resources development
* Training of trainers, officials, and technicians involved in M&E: 10
* Participation in joint evaluations with external funding agencies: 1
* Creation of a network of evaluators to facilitate exchanges: 3

Resources management
* Allocation of resources for M&E: 4
* Creation of a database of information from M&E operations, with proper management tools to disseminate best practices: 9

Human resources development

* Virtually all groups included some form of a training component to meet the increased demand for M&E. Modalities were varied, but the results were identical: a significant increase in the knowledge and practice of evaluation. Some plans included private sector participation in training sessions (Uganda, Ethiopia, Tanzania). For the group from Tanzania, "support (for) the existing academic and research institutions and private consulting firms" was key to the strengthening process.
* Breaking the barriers inside and outside their respective countries by creating a network of evaluators was viewed as an important element in ECD (Morocco, Tanzania, Uganda, Ethiopia).

Resources management

* Voting or legislation is no substitute for the allocation of funds. Some groups said that lack of means was a major constraint to ECD (Malawi, Guinea, Tanzania).
* "Strengthen database and management information systems for effective evaluation" (Ghana) and other similar proposals pointed to the participants' appreciation that information produced by an M&E system required careful collection, management, and analysis to extract both the lessons learned and the best practices.

Part V: Challenges and Prospects

21 Wrap-Up Session

Gabriel M. B. Kariisa

This is a follow-up seminar to the one organized by the Bank in 1990. The purpose of the current seminar is to provide African country teams with an understanding of the critical linkages between evaluation and public expenditure management; to provide sufficient knowledge and skills for them to develop country-specific preliminary action plans for M&E systems; and to create country networking for follow-up work.

As we recall, in the opening session the Vice President of the ADB emphasized the importance of M&E in African countries to maximize the use of scarce resources. He underscored the need for the establishment of M&E capabilities. He also stressed the importance of this seminar and the expected outcome, which should be realistic and should be followed by achievable action plans.

The Honorable Minister from Côte d'Ivoire spoke of his country's ongoing reform process and the importance of M&E in the effort to fulfill the government's responsibility toward the citizens and the taxpayers. Côte d'Ivoire is one of the few countries in Africa that has gone ahead with the establishment of M&E functions and their regrouping (M&E and planning functions under one agency) to ensure effective feedback. He hoped that the seminar would produce a set of actions for each country, and suggested a system of networking among African countries.
I see this seminar as an opportunity, first, to develop an action plan for establishing an appropriate M&E capability in African countries in collaboration with the donor community and, second, to develop networking for future contacts and exchange of views and ideas.

As you recall, the seminar kicked off with a discussion of the experiences in Africa. In the past, the concern was more with public funds and the identification of deviations from approved procedures. In recent years, particularly following structural adjustment programs, the need for accountability, transparency, and good governance has become increasingly important in fulfilling the expectations of civil society as well as satisfying the conditionalities of donor agencies. Such a change of focus has invariably reinforced the concept and importance of M&E in overseeing the functioning of the public sector.

The level of M&E capacity development, however, varied from country to country, as illustrated by the participants' presentations. In some, available capacity and institutional infrastructure still limit the development of this important function. Some of the inhibiting factors cited include:

* M&E has been supply-driven rather than demand-driven.
* A participatory approach has been lacking.
* Skills and logistics have been lacking.
* There has been a lack of appreciation of the usefulness of M&E for policy, program, sectoral, and project formulation.
* There has been a lack of incentives to enhance participation, ownership, transparency, and good governance.

The presentation on the multilateral perspective showed that evaluation has so far remained a Cinderella, but its importance is now increasing with the rising demand for accountability. It is now driven by the demand side, mainly because of the need for good governance. On the supply side, developing countries' capacity in M&E is fragmented and needs to be harmonized. The mismatch between supply and demand needs to be eliminated.
The prospects are much more favorable today. The World Bank presentation enlightened us with its current strategies, which emphasize quality of partnership and evaluation. It is interesting to note the commitment of the World Bank to focus on building M&E capacity in Africa in the next year or two by reengineering its country strategies and using a country approach as opposed to one centered on micro institutions. The evaluations would focus on the impact and effectiveness of structural interventions and social and long-term development programs.

The Latin American experience showed the importance attached to the level of commitment to M&E and its placement, which sometimes has been as high as the presidential level, as shown by the example of Costa Rica. Three types of evaluation models in Latin America were explained. The importance of the current information revolution and its crucial role in M&E development was also highlighted.

The experience of African countries with respect to the linkages between fiscal management and M&E was shared. Experience seems to suggest that the effort has mainly been limited to project-level monitoring activities, rather than overall evaluation for appropriate feedback to reformulate policies and programs or to develop transparency, accountability, and ownership. Such shortcomings relate to a lack of expertise and commitment on both the demand and the supply sides.

Arising from the discussions on present capacity and initiatives in regard to M&E systems in African countries, a number of issues were identified for discussion in the three country groups.
Some of these issues include:

* Weak government commitment
* Lack of awareness of benefits
* A civil service based on operational culture
* Human resource constraints, in quantity as well as quality
* Lack of compatible incentives
* Inadequate budgetary allocations
* Unreliable databases
* Lack of guidelines
* Lack of performance indicators
* Lack of equipment and networking.

This morning, we discussed resources, partnerships, and participation and feedback, with some excellent presentations from the multilateral and bilateral donors. Our aim was to identify sources of human, material, and financial resources for building evaluation supply and demand to address current issues and problems.

One resource identified was twinning arrangements and the possibility of using these to build M&E capacity. The experience of Norway brought forward the added advantages (sharing experience, continuity, and sustainability) of such arrangements over the individual training or technical assistance traditionally used to fill a gap. The use of research centers, such as African research institutions and universities, and of the private sector, independently or jointly with user ministries, was considered a positive contribution to capacity building, as well as to enhancing the credibility of evaluations.

On the issue of donor coordination, there was almost general agreement that this should be done by the developing countries themselves. The donor partnership, or its lack, in the development of M&E capacity; the problems and the ways to move forward, beginning with intradonor coordination and the identification of sectors for evaluation; the increasing use of local consultants; and the role of civil servants in M&E exercises all generated considerable discussion.

The session on participation and feedback was very instructive.
Participation (including the NGOs) was considered a key area in achieving credibility and acceptance of results and lessons, even though it may involve additional costs. In disseminating the results of evaluation, the needs of transparency and sensitivity must both be kept in view. A novel approach to oral dissemination used in South Africa generated considerable interest.

This evening, we have had some very interesting presentations about the application of a framework for ECD. The identification of the eight steps has captured the substance of the experiences of the African countries and provided a vital key to move on to the next step. The follow-on step 8, in the very prescient 2x2 format to map opportunities and options for ECD, has generated highly informed discussions in the country groups. We have just heard the conclusions of these discussions.

Our step-by-step approach, and the full participation of both donees and donors since yesterday, has now brought us to a stage from which we can take off and develop action plans tomorrow. We will have the benefit of another presentation by the World Bank on the general approach to the development of an action plan. With all this wisdom and the benefit of the experience of peer countries and donors, we can confidently launch into our country action plans. Our sights are now clear. We must remember that the individual country situations and prospects of donor support must be factored in strongly.

22 The Road Ahead

Robert Picciotto

The 1998 Abidjan seminar-workshop offered a glimpse of the ever-changing landscape of public sector performance evaluation in Africa. It allowed participants to better grasp what has been attained since the 1990 meeting. The changes have been important:

* Africa is experiencing a new development momentum.
* Development management is rapidly adopting results-focused methods.
* The greater role of civil society has brought to the forefront issues of good governance and more effective public administration.
* The links between public expenditures and M&E approaches have intensified.
* To sustain these transformations, new and more stable forms of partnership were developed between funding agencies and their counterparts.
* Funding agencies have agreed to assist in mounting pilot projects in Africa to identify best practices.

Discussions and exchanges during the seminar sent a clear signal that the trend toward more responsive, accountable, and efficient government is lasting and will influence future development strategies. They also underlined that not all countries were at the same level of institutional development in ECD, and that each country followed its own path according to its administrative culture. This requires that a step-by-step approach be adopted, one tailored to the needs and stage of each country. A diagnosis of demand and supply for evaluation should be initiated in partnership to identify an action plan.

The proposed action plans, although not officially endorsed, did isolate important recurrent themes:

* Institutional support, both inside and outside Africa, for ECD is seen as crucial. Whether it is improved legislation, a new set of administrative guidelines for evaluation, or increased awareness of the problems among national governments, donors, or international organizations, every avenue must be used. It may well be that there is a need for action plans devised and programmed by African governments, but an "Action Plan for Donors" may also help sustain the momentum.
* Training support in Africa (on M&E or on evaluation concepts, methods, and practices) appeared as the backbone of any ECD program.
* Different databases were suggested: (1) one of evaluators (practitioners, consultants, government officials in charge of M&E, auditing boards, private sector firms), which could be the first step toward the creation of an African Evaluation Society; (2) another that would collect and make accessible the lessons learned and best practices of M&E operations.

The road ahead is complex and full of important challenges. Constraints to the development of a demand for evaluation capacity must be lifted; coordinated efforts at the countrywide level may help resolve the obstacles to the supply of effective, credible, and trained expertise. So far, harmonization in ECD strategies has not been adequate: the match between demand and supply has been lacking. Concerted efforts to resolve this and other problems are needed to pave our road to the future.

Annexes

List of Participants

BURKINA FASO

1. Son Excellence, M. Hamidou Pierre WIBGHA, Ministre Délégué Chargé des Finances, 02 BP 5573, Ouagadougou. Tel: (226) 32 48 08. Fax: (226) 32 43 60.

2. Dr. Blaise Antoine BAMOUNI, Directeur de la Médecine Préventive, Ministère de la Santé, 09 BP 279, Ouaga 09. Tel: (226) 33 47 28 / 36 23 57 / 20 02 07. Fax: (226) 32 49 38. E-mail: blaise.bamouni@sante.gov.bf

CÔTE D'IVOIRE

3. M. Alexandre ASSEMIEN, Directeur du Plan, Ministère de la Planification et de la Programmation du Développement. Tel: (225) 22 30 24. Fax: (225) 22 30 24.

4. M. Victorien O. DERE, Direction du Plan, Ministère de la Planification et de la Programmation du Développement. Tel: (225) 22 25 35. Fax: (225) 22 30 24.

5. M. Semon BAMBA, Sous-Directeur Chargé des Finances Publiques, Ministère de la Planification et de la Programmation du Développement, 22 BP 1290, Abidjan 22. Tel: (225) 22 25 38 / 21 24 48. Fax: (225) 22 30 24.

6. Mme. Josther KOUAKOU, Chef de la Section Instrument PPBS au BNETD, Ministère de la Planification et de la Programmation du Développement, 04 BP 945, Abidjan 04. Tel: (225) 44 69 26 / 44 20 51 / 48 81 86. Fax: (225) 44 56 66 / 48 71 28. E-mail: jkouakou@Bnetd.sita.net

ETHIOPIA

7. Mr. Ato Asrat KELEMEWORK, Head, Budget Department, Ministry of Finance, P.O. Box 1905, Addis Ababa. Tel: (251) 1 125 305. Fax: (251) 1 551 355.

GHANA

8. Mr. Hudu SIITA, Head, Project Implementation Monitoring Unit, Ministry of Finance, P.O. Box M40, Accra. Tel: (233) 21 66 41 31. Fax: (233) 21 66 38 54. E-mail: mofpad4@ncs.com.gh

9. Mr. Isaac ADAMS, Head, Information Monitoring & Evaluation Unit, Ministry of Health, P.O. Box M44, Accra. Tel: (233) 21 31 86 12. Fax: (233) 21 78 02 78. E-mail: moh-ime@africaonline.com.gh

10. Mr. Ohene OKAI, Director, Policy Planning, Budgeting, Monitoring & Evaluation (PPBME), Ministry of Works and Housing, P.O. Box M43, Accra. Tel: (233) 21 66 54 21, Ext. 2068. Fax: (233) 21 66 68 60.

11. Mr. Patrick DONKOR, Deputy Director and Head of the Monitoring and Evaluation Division, National Development Planning Commission, P.O. Box C 633, Cantonments, Accra. Tel: (233) 21 77 30 11/3 or 28 21 26 44. Fax: (233) 21 30 55 46.

GUINEA

12. M. Mamadou BAH, Directeur National, Programmation pluri-annuelle, Ministère du Plan et de la Coopération, BP 221, Conakry. Tel: (224) 41 34 95. Fax: (224) 41 30 59 ou 41 55 68.

13. M. Mamadou Sombili DIALLO, Chef de Service Suivi-Evaluation, Ministère du Plan et de la Coopération, BP 221, Conakry. Tel: (224) 41 34 95 ou 41 55 68 ou 41 26 67. Fax: (224) 41 30 59.

14. Dr. Boubacar SALL, Chef de Section, Planification & Evaluation, Ministère de la Santé, BP 585, Conakry.

15. M. Aboubacar DEM, Chef de Service, Suivi-Evaluation, Ministère de la Pêche et de l'Elevage, BP 307, Conakry. Tel: (224) 41 12 58 ou 45 19 26 ou 41 43 10 ou 41 52 30. Fax: (224) 41 43 10 ou 45 19 26.

16. M. Kissi Kaba OULARE, Chef de Service, Suivi-Evaluation des Projets routiers, Ministère de l'Equipement, BP 438, Conakry. Tel: (224) 45 45 16.

17. M. Joseph Emile FOULAH, Chargé de Projets à l'Administration et Contrôle des Grands Projets (ACGP), Présidence de la République, Quartier Boulbinet, BP 412, Conakry. Tel: (224) 41 52 00 ou 45 39 90 ou 41 45 17. Fax: (224) 46 16 18.

MALAWI

18. Mr. Patrick KAMWENDO, Economist, Ministry of Finance, P.O. Box 30049, Lilongwe 3. Tel: (265) 78 48 24 or 82 62 09. Fax: (265) 78 16 79.

19. Mr. Wilfred R. CHINTHOCHI, Deputy Secretary (Administration), Ministry of Finance, P.O. Box 30049, Lilongwe 3. Tel: (265) 78 21 99 or 78 32 48. Fax: (265) 78 16 79.

20. Ms. Chrissie CHILIMAMPUNGA, Economist, Ministry of Finance, P.O. Box 30049, Lilongwe 3. Tel: (265) 78 21 99. Fax: (265) 78 16 79.

MOROCCO

21. M. Abdelaziz BELOUAFI, Chef de la Division du Suivi et de l'Evaluation, Direction de la Programmation et des Affaires Economiques (DPAE), Ministère de l'Agriculture, Km 4, Route de Casablanca, Rabat. Tel: (212) 7 69 03 98 ou 69 02 00. Fax: (212) 7 69 84 01.

22. M. Kaddour TAHRI, Directeur du Centre National d'Evaluation des Programmes (CNEP), Ministère de la Prévision Economique et du Plan, 26, Rue Jbal Ayachi-Agdal, Rabat. Tel: (212) 7 67 09 47. Fax: (212) 7 67 18 84. E-mail: Kadtahri@mpep.gov.ma

23. M. Abderrahmane HAOUACH, Directeur Adjoint, Centre National d'Evaluation des Programmes (CNEP), Ministère de la Prévision Economique et du Plan, 26, Rue Jbal El Ayachi-Agdal, Rabat. Tel: (212) 7 67 32 86. Fax: (212) 7 67 18 84.

24. M. Fouad SAMIR, Chef de la Division des Relations et Financements Multilatéraux, Direction du Trésor et des Finances Extérieures, Ministère de l'Economie et des Finances. Tel: (212) 7 76 08 72. Fax: (212) 7 76 49 50.

25. M. Hassan EL RHILANI, Chef de la Division de la Programmation, Direction des Bâtiments et de l'Equipement, Ministère de l'Education Nationale, 3, Avenue Ibn Sina, Agdal, Rabat. Tel: (212) 7 77 09 59. Fax: (212) 7 77 02 82.

26. M. Ahmed EZZAIM, Chef de la Division des Etudes, Ministère de l'Equipement, Rabat-Chellah. Tel: (212) 7 76 50 41. Fax: (212) 7 76 33 50. E-mail: ZAIM@MTPNET.GOV.MA

MOZAMBIQUE

27. Senhora Ivone AMARAL, Head, Rural Water Department, National Directorate for Water, Av. Eduardo Mondlane N°. 1392, 4° Andar, Maputo. Tel: (258) 1 43 02 03 or 42 32 69. Fax: (258) 1 43 01 10. E-mail: PRONAR@DMAM.VEM.MZ

28. Senhor Miguel MAGALHAES, Head, Department of Planning & Investment, National Directorate for Water, Av. 25 De Septembro 942, P.O. Box 1611, Maputo. Tel: (258) 1 30 80 13. Fax: (258) 1 42 07 43 or 42 14 03.

29. M. Alberto C.A. VAQUINA, Directeur Provincial de Santé, Ministère de la Santé, Direcção Provincial de Saúde de Cabo Delgado, C.P. 4, Pemba. Tel/Fax: (258) 72 22 13.

30. M. Tiago MACUACUA, Directeur Provincial de la Santé, Ministère de la Santé, Avenue 24 de Julho 1507-12°E, Maputo. Tel: (258) 1 42 75 66 ou 2 22 54 67. Fax: (258) 2 22 54 67.

31. M. Ricardo A. TRINDADE, Directeur National Adjoint du Personnel, Ministère de la Santé, Av. Eduardo Mondlane, C.P. 264, Maputo. Tel/Fax: (258) 1 42 21 59.

SOUTH AFRICA

32. Mr. Indran NAIDOO, Director, Monitoring and Evaluation, Department of Land Affairs, Private Bag X833, Pretoria, 0001. Tel: (27) 12 312 8250. Fax: (27) 12 323 8041. E-mail: ianaidoo@ghq.pwv.gov.za

TANZANIA

33. Mr. Elikunda E. MSHANGA, Assistant Director in the Public Investment Division of the Planning Commission, P.O. Box 9242, Dar-Es-Salaam. Tel: (255) 51 112-681/3 or 117-101/2 or 130-314. Fax: (255) 51 115 519.

34. Mr. Benedict B. JEJE, Director of Nutrition Policy and Planning, Tanzania Food and Nutrition Centre, 22 Ocean Road, P.O. Box 977, Dar-Es-Salaam. Tel: (255) 51 118-137/9. Fax: (255) 51 116-713. E-mail: tfnc@costech.gn.apc.org

UGANDA

35. Mr. Michael WAMIBU, Senior Economist, Ministry of Finance, Planning and Economic Development, P.O. Box 8147, Kampala. Tel: (256) 41 25 70 90. Fax: (256) 41 25 00 05 or 23 01 63 or 34 13 97. E-mail: macro@imul.com

36. M. F. X. K. WAGABA, Decentralization Secretariat, Ministry of Local Government, P.O. Box 7037, Kampala. Tel: (256) 41 25 08 76 or 34 62 04. Fax: (256) 41 25 08 77. E-mail: decsec@imul.com

ZIMBABWE

37. Mr. Michael S. SIBANDA, Deputy Director, Domestic & International Finance, Ministry of Finance, Private Bag 7705, Causeway, Munhumutapa Building, Samora Machel Ave., Harare. Tel: (263) 4 79 45 71 or 79 65 78. Fax: (263) 4 79 27 50 or 79 65 63.

38. Ms. M. MANDINYENYA, Deputy Director, National Economic Planning Commission (NEPC), Office of the President & Cabinet, Private Bag 7700, Causeway, Harare. Tel: (263) 4 73 47 89 or 79 61 91. Fax: (263) 4 79 59 87.

39. M. B. MUTANGADURA, Manager, Distribution Services, Zimbabwe Electricity Supply Authority, P.O. Box 377, 25 Machel Avenue, Harare. Tel: (263) 4 77 45 08. Fax: (263) 4 77 45 42. E-mail: boniface@ZESA.CO.ZW

40. Mr. Nicolas Dlamini KITIKITI, Director, Economics Section, Monitoring and Implementation Department, Office of the President and Cabinet, Private Bag 7700, Causeway, Harare. Tel: (263) 4 70 44 25. Fax: (263) 4 72 87 99.

DAC MEMBERS / MEMBRES DU CAD

DAC/OECD-CAD/OCDE

41. Mr. Raundi Halvorson-Quevedo, Administrator, Strategic Management and Development Cooperation Division. Tel: (331) 45 24 90 59. Fax: (331) 45 24 19 96. E-mail: raundi.halvorson-quevedo@oecd.org

DENMARK

42. Mr. Niels DABELSTEIN, Chair, DAC WPAE / Head of Evaluation Secretariat, Danida, Ministry of Foreign Affairs, 2, Asiatisk Plads, DK 1448 Copenhagen K. Tel: (45) 33 92 00 39. Fax: (45) 33 92 16 50. E-mail: niedab@um.dk

EUROPEAN COMMISSION

43. Ms. Susanne B. WILLE, Evaluator, SCR Service Evaluation, Commission Européenne, 200 Rue de la Loi, 1049 Bruxelles. Tel: (32) 2 29 55 200. Fax: (32) 2 29 92 912. E-mail: Susanne.Wille@SCR.CEC.be

FRANCE

44. Mme Monique HUGON
75700 PARIS 07 SP. Tel: (33) 1 53 69 43 48 or 1 44 49 75 41. Fax: (33) 1 53 69 43 98. E-mail: michael.ruleta@cooperation.gouv.fr

GERMANY

47. Ms.
Dorothea GROTH Ministere des Affaires Etrangeres Deputy Division Chief 244 Bd. St. Germain Evaluation Division 75007 Paris Federal Ministry for Economic Tel: (33) 1 43 17 80 02 Cooperation and Development Fax: (33) 1 43 17 85 17 Friedrich - Lebert -Allee 40 E-mail: 53113 Bonn monique.hugon@diplomatie.fr Tel: (49) 228 535 3122 Fax: (49) 228 535 3815 45. Mme Anne-Marie CABRIT E-mail: Groth@bmz.bund400.de Responsable Unite d'evaluation Direction du Tresor INTERNATIONAL MONETARY FUND Ministere de l'Economie et des (IMF) Finances 139, Rue de Bercy, Teledoc 649 48. M. Pierre Ewenczyk 75572 Paris, CEDEX 12 Resident Representative in Tel: (33) 1 44 87 73 06 C6te d'Ivoire Fax: (33) 1 44 87 71 70 Tel: (225) 21 90 46 E-mail: anne-marie.cabrit Fax: (225) 22 02 62 @dt.finances.gouv.fr IRELAND 46. M. Michael Ruleta Mission d'Etudes, d'Evaluation et de 49. Dr. Brian WALL Prospective (MEEP) Chief, Project Analyst Evaluation & Ministere des Affaires Etrangeres, Audit Unit Cooperation et Francophonie Department of Foreign Affairs 20, Rue Monsieur 76-78 Harcourt Streeet List of Participants 251 Dublin 2 53. M. Stein-Erik KRUSE Tel: (353) 1 408 2613 Consultant Fax: (353) 1 408 2626 Center for Partnership in Develop- E-mail: Brian.Wall@iveagh.irlgov.ie ment (DIS) P.O. Box 23 JAPAN 0319 Oslo Tel: (47) 22 45 18 11 50. Mr. Yukihiro NIKAIDO Fax: (47) 22 45 18 10 Director of Evaluation E-mail: Kruse@dis.no Division Economic Cooperation Bureau PORTUGAL Ministry of Foreign Affairs Tel: (81) 3 35 92 83 42 54. Ms. Maria M. GOMES AFONSO Fax: (81) 3 35 93 80 21 Evaluation Team Instituto Da Cooperassao 51. Ms. Aki MATSUNAGA ICP Assistant to the Resident R.D. Francisco MANUEL Representative De MELO N° 1, 20 D1O JICA (Japan International Coopera- 1070 Lisboa tion Agency-Abidjan Office) Tel: (351) 1 381 2780 04 BP 1825 Fax: (351) 1 387 7219 Abidjan 04 E-mail: Tel: (225) 22 22 90 pad.instcoop@mail.telepac.pt Fax: (225) 22 22 91 SWEDEN NORWAY 55. Ms. Anne-Marie FALLENIUS 52. Mr. 
Helge KJEKSHUS Director, Head of Evaluation and Special Advisor Internal Audit Policy Planning & Evaluation Swedish International Development Division Cooperation Agency (SIDA) Royal Ministry of Foreign Affairs Tel: (46) 8 698 5441 7 Juni Plassen/Victoria Terrasse Fax: (46) 8 698 5610 P.O. Box 8114 E-mail: anne-marie.fallenius@sida.se N-0032 Oslo Tel: (47) 22 24 35 14 Fax: (47) 22 24 95 80 252 List of Participants SWITZERLAND UNITED KINGDOM 56. Ms. Catherine CUDRE-MAUROUX 59. Mr. C. RALEIGH Adviser, SDC Head, DFID Evaluation Department Bretton Woods Division Tel: (44) 171 917 0545 Eigerstrasse 73 - E 417 Fax: (44) 171 917 0561 3003 Bern E-mail: c-raleigh@dfid.gtnet.gov.uk Tel: (41) 31 323 5096 Fax: (41) 31 324 1347 THE WORLD BANK E-mail: OPERATIONS EVALUATION catherine.cudre@deza.admin.ch DEPARTMENT 57. Mme Beatrice SCHAER 60. Mr. Robert PICCIOTTO Adviser, SDC Director-General, Operations Multilateral Development Division Evaluation Department and Chair, Eigerstrasse 73 MDB Evaluation Cooperation Group 3003 Bern Tel: (1) 202 458 4569 Tel: (41) 31 324 1459 Fax: (1) 202 522 3200 Fax: (41) 31 324 1347 E-mail: rpicciotto@worldbank.org E-mail: beatrice.schaer@deza.admin.ch 61. Ms. Linda MORRA Coordinator, Training, Partnership, THE NETHERLANDS ECD World Bank-OEDPK 58. Mr. Ted KLIEST G6-117 Senior Evaluator 1818 H Street N.W. Policy and Operations Evaluation Washington, D.C. Department (IOB) Tel: (1) 202 473 1715 Ministry of Foreign Affairs Fax: (1) 202 522 1655 P.O. Box 20061 E-mail: lmorra@worldbank.org 2500 EB The Hague Tel: (31) 703 486 201 62. Mr. Osvaldo N. FEINSTEIN Fax: (31) 703 486 336 Manager, OEDPK E-mail: kliest@iob.minbuza.nl World Bank 1818 H Street Washington, D.C. List of Participants 253 Tel: (1) 202 458 0505 UNDP Fax: (1) 202 522 3125 E-mail: ofeinstein@worldbank.org 67. Mr. Arild HAUGE Evaluation Advisor 63. Mr. Anwar SHAH Evaluation Office Principal Evaluation Officer, OEDCR UNDP, Room DC1-427 World Bank 1 UN Plaza, N.Y.C. 
1818 H Street 10017 New York Washington, D.C. Tel: (1) 212 906 5066 Tel: (1) 202 473 7687 Fax: (1) 212 906 6008 Fax: (1) 202 522 3124 E-mail: ahauge@undp.org E-mail: ashah@worldbank.org INTER-AMERICAN DEVELOPMENT ECONOMIC DEVELOPMENT BANK INSTITUTE-LEARNING AND LEADERSHIP CENTER 68. Mr. Jean S. QUESNEL Director, Evaluation Office 64. Mr. Ray RIST Tel: (1) 202 623 2876 Evaluation Advisor, EDI Fax: (1) 202 623 3694 Tel: (1) 202 458 5625 E-mail: jquesnel@iadb.org Fax: (1) 202 522 1655 E-mail: rrist@worldbank.org WEST AFRICAN DEVELOPMENT BANK 65. Mr. Keith MACKAY Evaluation Officer, OED/EDI 69. M. Hyacinthe YAMEOGHO Tel: (1) 202 473 7178 Directeur de l'Evaluation des Fax: (1) 202 522 1655 op&ations et de l'Audit Interne E-mail: kmckay@worldbank.org Banque Ouest-Africaine de D6veloppement (BOAD) AFRICA REGION BP 1172 Lome 66. Mr. Jaime BIDERMAN Tel: (228) 21 59 06 Lead Specialist, AFTSI ou 21 42 44 ou 21 01 13 E-mail: jbiderman@worldbank.org Fax: (228) 21 52 67 ou 21 72 69 ou 25 72 69 E-mail: hyaneogo@boad.org 254 List of Participants DEVELOPMENT BANK OF SOUTHERN Fax: (225) 20 40 80 AFRICA (DBSA) E-mail: Eldersh@Afdb.org 70. ML Tladi P. DITSHEGO 73. M. E. G. TAYLOR-LEWIS Evaluation Specialist Manager, Operations par pays Nord Operations Evaluation Unit (OCDN. 2) Development Bank of Southern Tel: (225) 20 41 99 Africa Fax: (225) 20 49 19 P.O. Box 1234 E-mail: g.taylor-lewis@afdb.org Midrand, Halfway House 1685 South Africa 74. ML T. GUEZODJE Tel: (27) 11 313 3087 or 82 924 2882 Principal Country Economist Fax:(27) 113133086or3133411 (OCDW) E-mail: tladid@dbsa.org Tel: (225) 20 46 67 Fax: (225) 21 63 73 AFRICA CAPACITY BUILDING FOUN- E-mail: T. Guezodje@Afdb.org DATION (ACBF) 75. Mr. A. D. BEILEH 71. Dr. Apollinaire NDORUKWIGIRA Manager, OCOD.1 Principal Programme Officer Tel: (225) 20 41 50 African Capacity Building Fax: (225) 20 42 20 Foundation (ACBF) E-mail: a.beileh@afdb.org Southampton Life Centre 7th Floor P.O. Box 1562 76. M. 
Henock KIFLE Harare Directeur, CADI Tel: (263) 4 702 931 or 702 932 Tel: (225) 20 40 65 Fax: (263) 4 702 915 Fax: (225) 20 55 44 E-mail: root@acbf.samara.co.zw E-mail: h.kifle@afdb.org AFRICAN DEVELOPMENT BANK 77. M. Mohamed TANI Charge de formation Superieur, 72. DL A. EL DERSH (CADI) Administrateur, President du Comite Tel: (225) 20 42 44 des Operations et Efficacite du Fax: (225) 20 55 44 developpement E-mail: m.tani@afdb.org Tel: (225) 20 40 17 List of Participants 255 78. Mr. Knut OPSAL Tel: (225) 20 43 16 Manager, OESU Fax: (225) 20 59 87 Tel: (225) 20 41 26 E-mail: n.sangbe@afdb.org Fax: (225) 20 50 33 E-mail: k.opsal@afdb.org 84. M. A.B. SEMANOU Charge de Post-evaluation Principal, 79. Mr. Gabriel NEGATU OPEV NGO Coordinator, OESU Tel: (225) 20 43 63 Tel: (225) 20 52 29 Fax: (225) 20 59 87 Fax: (225) 20 50 33 E-mail: b.semanou@afdb.org E-mail: g.negatu@afdb.org 85. M. T. Dogbe KOUASSI 80. M. G.M.B. KARIISA Charge de Post-evaluation Pricipal, Directeur, OPEV OPEV Tel: (225) 20 40 52 Tel: (225) 20 45 14 Fax: (225) 20 59 87 Fax: (225) 20 59 87 E-mail: g.kariisa@afdb.org E-mail: t.kouassi@afdb.org 81. M. M.H. MANAI 86. M. M. THIAM Charge de Post-evaluation En chef, Charge de Post-evaluation Principal, OPEV OPEV Tel: (225) 20 45 16 Tel: (225) 20 45 11 Fax: (225) 20 59 87 Fax: (225) 20 59 87 E-mail: m.manai@afdb.org E-mail: m.thiam@afdb.org 82. M. W. BYARUHANGA 87. M. T.T. KATOMBE Charge de Post-evaluation Principal, Charge de Post-evaluation Principal, OPEV OPEV Tel: (225) 20 44 22 Tel: (225) 20 46 57 Fax: (225) 20 59 87 Fax: (225) 20 59 87 E-mail: w.byaruhanga@afdb.org E-mail: t.katombe@afdb.org 83. M. N. SANGBE 88. Mme. Gennet YIRGA HALL Charge de Post-evaluation Principal, Chargee de Post-evaluation Principal, OPEV OPEV 256 List of Participants Tel: (225) 20 42 94 Fax: (91) 11 215 4544 Fax: (225) 20 59 87 E-mail: V.R.MEHTA E-mail: g.hall-yirga@afdb.org [vrmehta@giasdlOl.vsnl.net.in] 89. Mme Z. KHORSI OBSERVERS Chargee de Post-evaluation Principal, OPEV 93. M. 
Savina AMMASSARI Tel: (225) 20 43 38 Via Aladino Govoni, 35 Fax: (225) 20 59 87 00136 Roma E-mail: z.khorsi@afdb.org Italy Tel/Fax: (39) 06 3545 1017 CONSULTANTS E-mail: contacts@globeaccess.net 90. M. Raymond GERVAIS 7510 De Lorimer # 2 Montreal-Quebec H2E 2P3 Canada Tel: (1) 514 722 7193 Fax: (1) 514 376 0240 E-mail: gervaisr@total.net 91. M. Jacques TOULEMONDE C3E 13b Place Jules Ferry 69006 Lyon France Tel: (33) 4 72 83 7880 Fax: (33) 4 72 83 7881 E-mail: toulemonde@c3e.fr 92. M. Veerendra R. MEHTA Ganesh Deep 373 Anand Vihar, 'D' Block New-Delhi 110092 India Tel: (91) 11216 5500 or 11 215 2483 List of Participants 257 Summary Version of Agenda Regional Seminar and Workshop on Monitoring & Evaluation Capacity Development in Africa Abidjan, November 16-19, 1998 Summary Agenda Day 1: Monday, November 16 Part I - REGIONAL SEMINAR ON MONITORING AND EVALUATION DEVELOPMENT Opening Session Mr. Ahmed BAHGAT, Vice President, Finance and Planning, AfDB: Welcome Message His Excellency, Mr. Tidjane THIAM, Minister of Planning and Development Programming, C6te d'Ivoire: Opening Statement Mr. G. KARIISA, Director, Operations Evaluation Department, AfDB: Introduction: The Goals of the Workshop Mr. Niels DABELSTEIN, Head, Evaluation Secretariat, DANIDA, and Chair, DAC Working Party on Aid Evaluation: The DAC and the Regional Conferences on Evaluation: A Follow-Up Theme 1: Monitoring and Evaluation in Africa Moderator: Mr. Osvaldo FEINSTEIN, Manager, OEDPK, World Bank Mr. Arild HAUGE, Senior Adviser, Evaluation Office, UNDP: UNDP Experience in ECD Mr. Raymond GERVAIS, Consultant: Monitoring and Evaluation in Africa 259 Theme 2: Building a Monitoring & Evaluation System for Improved Public Sector and Expenditure Management Moderator: Dr. A. EL DERSH: Chairperson of the ADB Boards Committee of Operations and Development Effectiveness, CODE- AfDB. Mr. 
Robert PICCIOTTO, Director-General, Operations Evaluation Department, World Bank, and Chair, MDB Evaluation Cooperation Group: Developing Evaluation Capacity: The Road Ahead
Mr. Jean S. QUESNEL, Director, Evaluation Office, Inter-American Development Bank: Evaluation Capacity Strengthening in the Latin American and Caribbean Region

Getting Started: Experiences of African Countries
Moderator: Mr. Arild HAUGE, Evaluation Adviser, UNDP

Nodal Ministry Approaches
Mr. Kaddour TAHRI, Directeur, Centre National d'Évaluation des Programmes (CNEP), Ministère de la Prévision Économique et du Plan, Morocco: Evaluation Development in Morocco: Approach through Training
Ms. M. MANDINYENYA, Deputy Director, National Economic Planning Commission (NEPC), Zimbabwe: Monitoring and Evaluation: The Case of Zimbabwe

Decentralized Sectoral Approaches
Mr. Isaac ADAMS, Head, Information Monitoring & Evaluation Unit, Ministry of Health, Ghana: Sector-Wide Approach to Monitoring and Evaluation in the Health Sector
Mr. Abdelaziz BELOUAFI, Chef de Division, Suivi-Évaluation, Direction de la Programmation et des Affaires Économiques, Ministère de l'Agriculture, Morocco: Agricultural Development Monitoring and Evaluation System: SSEDA

Linking Fiscal Management to M&E
Moderator: Mr. Anwar SHAH, Principal Evaluation Officer, Operations Evaluation Department, World Bank
Mr. Michael WAMIBU, Chief Economist, Ministry of Finance, Uganda: Monitoring and Evaluation: Capacity Building in Uganda with a Focus on Public Expenditure
M. Mamadou BAH, Directeur National de la Programmation pluri-annuelle, Ministère du Plan et de la Coopération, Guinea: Monitoring and Evaluation System and Responsibilities in Guinea
Mr. S. SIBANDA, Deputy Director, Domestic & International Finance, Ministry of Finance, Zimbabwe: Linking Monitoring and Evaluation to Decisionmaking in Zimbabwe

A Summary of Discussions on Building a Monitoring & Evaluation System in the Context of Public Sector Improvement: Issue Identification. R. GERVAIS, Consultant

Group Work
Facilitators:
Mr. Gabriel KARIISA, Director, Operations Evaluation Department, AfDB
Mr. O. FEINSTEIN, Manager, OEDPK, World Bank
Mr. J. TOULEMONDE, Consultant

Working Dinner. Special Session: Presentation of the Toolbook for Public Expenditure Analysis. Mr. Anwar SHAH, OEDPK, World Bank

Day 2: Tuesday, November 17

Plenary Session: Group Reports
Moderator: J. TOULEMONDE, Consultant

Theme 3: Monitoring and Evaluation Capacity Building: Strategies and Resources for Building Evaluation Supply and Demand
Introduction to Parallel Sessions and Group Formation: Mr. Osvaldo FEINSTEIN, Manager, OEDPK, World Bank

Session a: Resources
Moderator/Facilitator: Mr. H. KIFLE, Director, African Development Institute, AfDB
Mr. Stein-Erik KRUSE, Centre for Partnership in Development (DIS), Norway: Development through Institutions: A Review of Institutional Development Strategies in Norwegian Bilateral Aid
Dr. Apollinaire NDORUKWIGIRA, Principal Programme Officer, African Capacity Building Foundation (ACBF): African Research Institutions and Universities as Resources
Mr. Hudu SIITA, Head, Project Implementation Monitoring & Evaluation Unit, Policy Analysis Division, Ministry of Finance, Ghana: Project Implementation Monitoring and Evaluation: Ghana's Experience

Session b: Partnerships
Moderator/Facilitator: Mr. Bisi OGUNJOBI, Director, Country Operations North, AfDB
Mr. Niels DABELSTEIN, Head, Evaluation Secretariat, Danida, and Chair, DAC Working Party on Aid Evaluation: Conducting Joint Sector/Thematic Evaluations
Mr. Christopher RALEIGH, Head, Evaluation Department, Department for International Development (DFID), United Kingdom: Country/Donor Partnership

Session c: Participation & Feedback
Moderator: Mr. T. NKODO, Director, Central Operations & Policy Department (OCOD), AfDB
Ms. Anne-Marie FALLENIUS, Director, Head of Evaluation and Internal Audit, Sida, Sweden: Stakeholders' Participation in Design/Conduct of Evaluations
Mr. K. OPSAL, Manager, Environment and Sustainable Development Division (OESU), AfDB: Role of NGOs in Evaluation
Mr. Raymond GERVAIS, Consultant: Thinking Through Reporting and Dissemination

Plenary Discussion: Session Reports
Mr. Gabriel KARIISA, Director, Operations Evaluation Department, AfDB
Moderator: Mr. Osvaldo FEINSTEIN, Manager, OEDPK, World Bank

Theme 4: Applying a Framework for ECD
Moderator: Ms. Linda G. MORRA, Coordinator for Training, Partnerships, and ECD, OEDPK, World Bank
Mr. Keith MACKAY, Senior Evaluation Officer, EDIES, World Bank: A General Framework for ECD
Mr. Ray RIST, Evaluation Adviser, EDIES, World Bank: Selecting an ECD Approach

Country Group Discussions to Consider the ECD Framework and Approaches
Facilitators:
Mr. Osvaldo FEINSTEIN, Manager, OEDPK, World Bank
Mr. Raymond GERVAIS, Consultant
Mr. Keith MACKAY, Senior Evaluation Officer, EDIES, World Bank
Mr. Tladi DITSHEGO, DBSA
Mme Monique HUGON, Direction Générale des Relations Culturelles, Scientifiques et Techniques, Ministère des Affaires Étrangères, Coopération et Francophonie, France

Reporting Out
Moderator: Mr. J. TOULEMONDE, Consultant
Country Group Reports

Wrap-Up Session
Mr. G. KARIISA, Director, Operations Evaluation Department, AfDB
Mr. Niels DABELSTEIN, Head, Evaluation Secretariat, Danida, and Chair, DAC Working Party on Aid Evaluation

Day 3: Wednesday, November 18

Part II - WORKSHOP ON ACTION PLAN DEVELOPMENT AND NEXT STEPS

Mr. Keith MACKAY, Senior Evaluation Officer, EDIES, World Bank: General Approach for ECD Action Plan Development
Country Group Discussions on an Action Plan
Group Reports

Day 4: Thursday, November 19

Part III - EVALUATION TRAINING WORKSHOP

Mr. H. KIFLE, Director, African Development Institute: Opening and Introduction
Mr. Ray RIST, Evaluation Advisor, EDIES, World Bank: Evaluation Concepts
Ms. Linda MORRA, Senior Evaluator, OEDPK, World Bank: Evaluation Design
Ms. Linda MORRA, Senior Evaluator, OEDPK, World Bank: Data Collection and Analysis Strategies
Mr. M.H. MANAI, Chief Evaluation Officer, AfDB: Monitoring
Mr. R. GERVAIS, Consultant: Utilization and Dissemination
Mr. Ray RIST, Evaluation Advisor, EDIES, World Bank: Additional Training Resources

Statement by Switzerland

Catherine Cudre-Mauroux

For the Swiss Development Cooperation, "monitoring" and "evaluation" are part and parcel of development activities. These tools require a cooperative environment in which everyone, in coordination with partners, is ready to observe his or her own activities with a critical eye and to draw the necessary conclusions for future action and behavior. The Swiss Development Cooperation believes that monitoring and evaluation fit into a global concept called PMEI: planning, monitoring, evaluation, and implementation. One example of the importance we attach to evaluation is our strong support for the evaluation of the "Special Programme for Africa." Our interest reflects our constant concern to draw lessons from what has been done and to deepen and improve project quality, coordination, and partnership. The Swiss Development Cooperation considers it particularly important to participate in such a seminar in order to better understand the approach and needs of African countries in matters of monitoring and evaluation, and the reasons for undertaking such an exercise.
Hence, we pay special attention to what we call our focus countries. Indeed, the Swiss Development Cooperation is involved in a number of the countries represented at this seminar. These focus countries are Mozambique, Burkina Faso, Tanzania, and South Africa. We support the other participating countries through multilateral institutions.

AFRICAN DEVELOPMENT BANK
Communications Unit
01 P.O. Box 1387
Abidjan 01, Côte d'Ivoire
Telegraphic Address: AFDEV ABIDJAN
Telex: 23263, 23717
Telephone: (225) 20-41-18, 20-55-40, 20-55-41, 20-55-42
Facsimile: (225) 20-40-06, 20-55-39
E-mail: comuadb@afdb.org
World Wide Web: http://www.afdb.org/

OED
Operations Evaluation Department
Partnerships & Knowledge Programs (OEDPK)
E-mail: ecampbellpage@worldbank.org
E-mail: eline@worldbank.org
Telephone: 202-473-4497
Facsimile: 202-522-3125

THE WORLD BANK
1818 H Street, N.W.
Washington, D.C. 20433, U.S.A.
Telephone: 202-477-1234
Facsimile: 202-477-6391
Telex: MCI 64145 WORLDBANK, MCI 248423 WORLDBANK
World Wide Web: http://www.worldbank.org/

World Bank InfoShop
E-mail: pic@worldbank.org
Telephone: (202) 458-5454
Facsimile: (202) 522-1500