INDEPENDENT EVALUATION GROUP
THE WORLD BANK
IEG EVALUATION CAPACITY DEVELOPMENT
ECD WORKING PAPER SERIES • 16 • MAY 2006

Experience with Institutionalizing Monitoring and Evaluation Systems in Five Latin American Countries: Argentina, Chile, Colombia, Costa Rica and Uruguay

Ariel Zaltsman

This paper provides a comparative analysis of five countries which have sought to institutionalize government-wide monitoring and evaluation (M&E) systems. Among the many lessons are the strong advantages of having high-level support, and the benefits of coordination among different stakeholders and systems. A number of strong features of Chile's main M&E system are also noted.

www.worldbank.org/ieg/ecd
The World Bank, Washington, D.C.

Copyright 2006
Independent Evaluation Group
Knowledge Programs & Evaluation Capacity Development
Email: eline@worldbank.org
Telephone: 202-473-4497
Facsimile: 202-522-3125

Evaluation Capacity Development (ECD) helps build sound governance in countries—improving transparency, and building a performance culture within governments to support better management and policymaking, and to strengthen accountability relationships—through support for the creation or strengthening of national/sectoral monitoring and evaluation systems. A related area of focus is civil society, which can play a catalytic role through provision of assessments of government performance. IEG aims to identify and help develop good-practice approaches in countries, and to share the growing body of experience with such work.

The IEG Working Paper series disseminates the findings of work in progress to encourage the exchange of ideas about enhancing development effectiveness through evaluation. An objective of the series is to get the findings out quickly, even if the presentations are somewhat informal. The findings, interpretations, opinions, and conclusions expressed in this paper are entirely those of the author. They do not necessarily represent the views of the Independent Evaluation Group or any other unit of the World Bank, its Executive Directors, or the countries they represent.

CONTENTS
Foreword
Abbreviations
Executive Summary
1. INTRODUCTION
2. CONFIGURATION OF THE M&E FUNCTION
3. THE M&E SYSTEMS' OBJECTIVES AND ORIGINS
4. THE SYSTEMS' IMPLEMENTATION AND SUBSEQUENT DEVELOPMENT
5. THE SYSTEMS' LEGAL FRAMEWORK
6. M&E SYSTEM ARCHITECTURE
6.1 The Systems' Components and M&E Activities
6.2 Organizational Framework and Distribution of Roles
6.3 Information Flows and Reporting Arrangements
6.4 Linkage Between M&E and Budgeting
7. FINAL REMARKS AND LESSONS DRAWN
Annex: List of People Interviewed or Consulted
BIBLIOGRAPHY
FOREWORD

The Independent Evaluation Group (IEG) ― formerly known as the Operations Evaluation Department (OED) ― of the World Bank has a long-standing program of support to strengthen monitoring and evaluation (M&E) systems and capacities in developing countries, as an important part of sound governance. As part of this support, IEG has prepared a collection of resource material including case studies of countries which can be viewed as representing good practice or promising practice. This resource material is available at: http://www.worldbank.org/ieg/ecd/

A growing number of governments in Latin America are working to strengthen their national M&E systems, and there are many important lessons from these efforts. The purpose of this paper is to provide a detailed analysis and taxonomy of the M&E systems in five countries in the region; together, these countries possess eight national systems for monitoring and/or for evaluation. The paper identifies the main objectives and intended uses of each of these systems, the way in which each system was developed over time, their legal frameworks, and the M&E system architecture ― such as roles and responsibilities, the extent of coordination, and reporting arrangements. Some limited evidence on the extent of utilization of the M&E information produced by each system is also presented. Utilization is the bottom-line measure of a system's effectiveness and usefulness; but to measure this issue for each system would require a series of separate, and detailed, reviews. The World Bank has recently completed such a review for Chile's main M&E system (World Bank, 2005).

The task manager for the comparative analysis presented in this paper was Yasuhiko Matsuda (LCSPS). The editor of this, and the other, working papers in the series was Keith Mackay (IEGKE). The views expressed in this paper are solely those of the author, and do not necessarily represent the views of the World Bank.
Klaus Tilmes
Manager
Knowledge Programs & Evaluation Capacity Development

ABBREVIATIONS

BEB  Budget Evaluation Bureau
BGI  Comprehensive Management Report
BAPIN  National Bank of Investment Projects
CCC  Commitment-with-the-Citizen Charter
CdR  Results-Based Commitments
CENOC  National Community Organizations' Center
CEPRE  Public Sector Reform Executive Committee
CFGP  Central Fund of Governmental Priorities
CLAD  Latin American Center for Development Administration
CONADIS  National Advising Commission for the Integration of Handicapped People
CONPES  National Economic and Social Policy Council
CSR  Comprehensive Spending Review
DCI  Division of Inter-ministerial Coordination
DEE  Special Division of Management Evaluation and Control
DEPP  Public Policy Evaluation Bureau
DIPRES  National Budget Bureau
DIPRO  Division of Organizational Planning and Reengineering
DNP  National Planning Department
EBPAO  Annual Operational Plan Basic Structure
EPG  Evaluation of Governmental Programs
EVO  Inter-American Development Bank's Evaluation Office
IDB  Inter-American Development Bank
IE  Impact Evaluation
INN  National Standardization Institute
MCS  Management Control System
MDS  Ministry of Social Development
M&E  Monitoring and evaluation
MIDEPLAN  Ministry of Planning
NCCSP  National Council for the Coordination of Social Policies
ONIG  National Office of Public Management Innovation
ONP  National Budget Office
OPP  Office of Planning and Budgeting
PAG  Annual Management Plans
PAO  Annual Operational Plan
PARP  Public Administration Renovation Program
PEG  Strategic Management Plans
PFMS  Physical and Financial Monitoring System
PI  Performance indicator
PMG  Management Improvement Program
PRyME  State Reform and Modernization Project
RAP  Activities and Programs Review
RBMS  Results-Based Management System
SAI  Supreme audit institution
SEGPRES  Ministry General Secretariat of the Presidency
SEV  Results-Based Management Evaluation System
SFRPIE  Standardized Funding Request for Programs' Innovation and Expansion
SIEMPRO  System of Information, Monitoring and Evaluation of Social Programs
SIG  Management Information System
SIGFE  Information System for the State's Financial Management
SIGOB  System of Presidential Targets' Programming and Management
SINE  National Evaluation System
SINERGIA  Public Management Results Evaluation System
SSPG  Governmental Programming Monitoring System
SSPPR  Situational Strategic Planning and Process Re-engineering

EXECUTIVE SUMMARY

This paper presents a comparative analysis of the ways in which Argentina, Chile, Colombia, Costa Rica and Uruguay have organized their monitoring and evaluation (M&E) functions. The analysis focuses on government-wide M&E initiatives only.

These five countries have structured their M&E functions in a variety of ways. In Colombia, Costa Rica, and Uruguay, the functions are concentrated in one single system. These systems are known, respectively, as the "Public Management Results Evaluation System" (SINERGIA), the "National Evaluation System" (SINE), and the "Results-Based Management Evaluation System" (SEV). In Chile, the M&E functions are currently organized around two systems. One of them is known as the "Management Control System" (MCS), and the other as the "Governmental Programming Monitoring System" (SSPG). These two systems were created by, and remain under the jurisdiction of, different institutions. Nevertheless, their M&E activities appear to complement each other well.
Finally, in Argentina, the M&E function is structured around three independent monitoring or evaluation systems, known as the National Budget Office's "Physical and Financial Monitoring System" (PFMS); the "System of Information, Monitoring and Evaluation of Social Programs" (SIEMPRO); and the "Results-Based Management System" (RBMS) monitoring scheme. Like Chile's two M&E systems, these three systems were established and are still run by different institutions, but they operate with no coordination with each other.

The stated objectives of these eight M&E systems can be grouped under one or more of the following five categories: (a) inform national planning; (b) support sector policy and program design, and fine-tuning; (c) inform the budget allocation process; (d) induce continuous management improvement; and (e) enhance transparency and accountability. The particular objectives of each system are related to the primary concerns of the broader reform initiatives of which they were part, and to the institutional and political environment in which they developed.

The implementation of all of these M&E systems followed a relatively gradual approach, with the exception of Uruguay's SEV and Chile's SSPG, which were launched across-the-board all at once. In general, implementation of the systems began with a series of pilot experiences, and the systems were only applied on a larger scale after their methodologies and procedures had attained a certain degree of maturity. Participation in the M&E systems was initially voluntary, and only became mandatory after a period of years. In all cases, implementation of the M&E systems involved a significant capacity-building effort on the part of the system's central coordinating unit, which included the provision of training, manuals and other support materials, and technical assistance and other on-going support to officials of participating agencies and programs, as well as to other stakeholders.

The M&E systems and their various components were created through a variety of legal instruments. In some cases, there was an explicit decision to rely, first, on relatively more malleable legal instruments (such as decrees or protocols of agreement between the Executive and Congress), and to provide a legal basis for the M&E systems and their instruments only later; this ensured that the methodologies and procedures of the M&E systems had enough time to reach a higher level of maturity. In general, the systems' legal frameworks define their objectives and functions, and the responsibilities of the various parties involved in the process; in most cases, they also indicate the types of M&E activity to be conducted. However, they all leave the responsibility for defining specific procedures and methodologies to the central coordinating units.

Only three of the eight systems rely on both monitoring and evaluation activities: Argentina's SIEMPRO, Chile's MCS, and Colombia's SINERGIA. The other five systems all base their assessments on performance monitoring alone. In addition, the systems also differ in the levels of public sector performance that they monitor or evaluate. Thus, Argentina's PFMS and SIEMPRO assess program-level performance; Argentina's RBMS and Uruguay's SEV focus on organizational performance; and Colombia's SINERGIA assesses program and sector level performance. Finally, Chile's SSPG monitors both policy- and agency-level performance, and Chile's MCS and Costa Rica's SINE monitor institutional and program-level performance.
The eight M&E systems examined in this report have been set up and remain under the control of Executive Branch institutions. Although all the systems have the stated objective of helping to hold governments accountable, Costa Rica's SINE is the only one where a supreme audit institution (SAI) that is independent of the Executive Branch participates in the preparation of the M&E agenda. In Colombia, SINERGIA's authorities plan to engage civil society organizations in the analysis and dissemination of the information that the M&E system produces, so as to strengthen its role as an instrument of social control.

The roles that the systems assign to their most immediate stakeholders follow some common patterns. Thus, with the exception of Costa Rica's SINE, the basic elements of the evaluation agenda are determined by the institution that is ultimately responsible for each system. The development of the system methodologies is always in the hands of the coordinating units. And the definition of the performance indicators and targets on which the systems base their assessments involves, to a greater or lesser extent, the participation of both the assessed programs and institutions and the systems' coordinating units. Moreover, the information that feeds into the systems is always provided by the assessed programs and institutions themselves. Finally, it is always the coordinating unit which is in charge of issuing the final performance assessments, save for Argentina's RBMS. The greatest differences across systems revolve around the level of leadership and control that they assign to their coordinating units, and the role of the assessed programs and institutions in the definition of the indicators and targets.

In the context of the evaluation components of the three systems that undertake this type of assessment, the decision as to what policies, programs or organizations will be evaluated is made by the system's sponsoring institution and/or some other entity independent of the programs or institutions to be evaluated, such as Congress or an inter-ministerial committee. Except for SINERGIA, which discusses with line ministries the type of evaluation to be undertaken, the evaluated institutions are completely excluded from this critical decision as well. The actual undertaking of the evaluations is always commissioned to external consultants or institutions that are selected through public bidding processes. In all three cases, the supervision of the evaluation process lies with the systems' coordinating units but, in Chile's MCS, this role is to some degree shared with an inter-ministerial committee and the evaluated programs and agencies themselves. The programs and agencies that are evaluated are invited to react to the draft and final evaluation reports, and they submit observations that are then attached to the official evaluation report as an annex.

The systems' M&E findings are always conveyed through different types of report. In some of the cases, the contents of these reports are tailored to the specific information needs of the intended reader. Some systems have begun experimenting with reader-friendly report formats that are written in very plain language, and make extensive use of graphs. This is done in an attempt to overcome the difficulties experienced by many of the intended information users in assimilating the original reports, which they found exceedingly lengthy and technical. In the five countries, the internet serves as the main channel of public dissemination.
But some of the systems also rely on other public dissemination means, such as press conferences and other channels involving the mass media. Monitoring information is often available through intranet and internet systems, which provide the various stakeholders with different levels of access.

Most of the systems have had the objective of promoting the use of M&E information and performance improvements by establishing budgetary or other institutional incentive mechanisms, but few have succeeded. The system that has accomplished most in this regard is Chile's MCS, which has set up a variety of incentives targeted both at the evaluated programs and agencies and at the budget decision-makers at the Ministry of Finance. In the other systems, for the most part, the evaluated agencies' and programs' main incentive to pay attention to and make use of this information is the fact that their performance is now being measured and tracked, and the resulting assessments are circulated both within government and publicly.

One of the most common obstacles to integrating M&E findings into the budget process arises from the inability of existing budget classifications to link policy and program goals and objectives with specific budget allocations. In principle, program-based budget classifications ― which the five countries have either adopted or intend to adopt ― should be able to maximize the benefits of M&E information for budget decision-making purposes. However, simply having such a program classification does not produce performance-based budget decision-making. But it also appears that when the decision to integrate performance considerations into the budget process comes from the highest levels of government, this can be achieved even in the absence of program-based budgeting.

1. INTRODUCTION∗

The objective of this paper is to examine the ways in which a number of Latin American countries have organized their monitoring and evaluation (M&E) functions, with a view to drawing lessons for the further development of a national M&E system in Brazil. For that purpose, the paper presents a comparative analysis of the M&E systems of Argentina, Chile, Colombia, Costa Rica and Uruguay. For the most part, the selection of national cases has been based on data availability. However, the fact that these five experiences are among the most documented ones in the region is probably an indication of their relative importance.

The analysis focuses on government-wide M&E systems only. That is, it does not cover sector-specific efforts which, at least in some of these countries, coexist with the initiatives examined here. Similarly, in the context of this paper, we reserve the phrase M&E for a variety of ongoing and retrospective policy, program and agency assessments. The analysis does not include ex-ante appraisal systems, such as those that only rely on cost-benefit or cost-effectiveness analysis, or other prospective assessment methods.

The paper is based on four information sources. The first of these sources is the relatively small number of studies that have been published on these country cases. The second source is the legislation that created and regulates the different M&E initiatives and their various components. As a third source, the paper used a large number of documents available on the M&E systems' web sites or obtained directly from their coordination units.
And the fourth and last source is a number of in-person and telephone interviews, and e-mail consultations with current or past M&E system officials or stakeholders from the five countries. Most of the interviews were conducted between January 2003 and October 2004 in the context of other projects.1 These have been complemented with a new round of telephone and e-mail consultations that were done between May 2005 and March 2006, with the objective of updating the information already collected and addressing some of the specific information needs of this paper.2

The main body of the paper consists of this introduction and five other sections. The next section describes how the M&E function is organized in each of the five countries. Section 3 discusses the M&E systems' origins and objectives. Section 4 outlines their implementation strategies and their subsequent developments. Section 5 provides a brief overview of the systems' legal frameworks. Section 6 contains a comparative characterization of the five countries' approaches to M&E; and Section 7 presents a set of final remarks and suggests a number of lessons to be drawn from these countries' experiences. More detailed reviews of each of the country cases are available from the author on request.3

∗ The author would like to thank Keith Mackay (Evaluation Capacity Development Coordinator at the World Bank) and Yasuhiko Matsuda (Senior Public Sector Specialist at the World Bank) for their insightful feedback on an earlier version of this paper.
1 These interviews and consultations were done as part of the author's preliminary doctoral dissertation research, and a project for the Latin American Center for Development Administration's (CLAD) 'Integrated and Analytical System of Information on State Reform, Management and Public Policies' (SIARE) web site: www.clad.org.ve/siare/
2 The final draft of this paper was shared with the systems' coordinating units which, except for Chile's Governmental Programming Monitoring System, provided feedback. Their reactions and comments have helped refine the final version.
3 ariel.zaltsman@nyu.edu

2. CONFIGURATION OF THE M&E FUNCTION

The countries included in this report have structured their M&E functions in a variety of ways (Table 1). In three of the countries ― Colombia, Costa Rica, and Uruguay ― the function is concentrated in one single system. These systems are known as the "Public Management Results Evaluation System" (SINERGIA) in Colombia; the "National Evaluation System" (SINE) in Costa Rica; and the "Results-Based Management Evaluation System" (SEV) in Uruguay.

In Chile, the M&E function is currently organized around two systems. One of them is known as the "Management Control System" (MCS), and the other as the "Governmental Programming Monitoring System" (SSPG). Unlike the other M&E initiatives in Chile, which were created by the National Budget Bureau (DIPRES) in the Ministry of Finance, the SSPG was established by the Ministry General Secretariat of the Presidency (SEGPRES). Notwithstanding its coordination with the MCS, it remains a separate system. Finally, in Argentina, the M&E function is structured around three independent monitoring and/or evaluation systems, known as the National Budget Office's "Physical and Financial Monitoring System" (PFMS); the "System of Information, Monitoring and Evaluation of Social Programs" (SIEMPRO);4 and the "Results-Based Management System's" (RBMS) monitoring scheme.
These three systems were created by and remain under the control of different institutions, and operate with no coordination with each other.

Table 1: Government-Wide M&E Systems of Argentina, Chile, Colombia, Costa Rica, and Uruguay
Argentina: National Budget Office's Physical and Financial Monitoring System (PFMS); System of Information, Monitoring and Evaluation of Social Programs (SIEMPRO); Results-Based Management System's monitoring scheme (RBMS)
Chile: Governmental Programming Monitoring System (SSPG); Management Control System (MCS)
Colombia: Public Management Results Evaluation System (SINERGIA)
Costa Rica: National Evaluation System (SINE)
Uruguay: Results-Based Management Evaluation System (SEV)

4 Unlike the other M&E systems, SIEMPRO focuses on social programs only. The reason for including it in the study is that those programs belong to different policy areas and report to six different ministries which collectively cover a large part of government spending (i.e., the Ministries of Social Development, Education, Health, Labor, Economy, and Planning).

3. THE M&E SYSTEMS' OBJECTIVES AND ORIGINS

The eight M&E systems examined in this report have been created with a variety of objectives (Table 2). Their stated objectives can be grouped under one or more of the following five broad categories: (a) inform national planning; (b) support sector policy and program design and fine-tuning; (c) inform the budget allocation process; (d) encourage continuous management improvement; and (e) enhance transparency and accountability.

Table 2: The M&E Systems' Stated Objectives
National planning: Chile's SSPG; Colombia's SINERGIA; Costa Rica's SINE
Policy and program design and fine-tuning: Argentina's SIEMPRO; Chile's SSPG and MCS; Colombia's SINERGIA; Costa Rica's SINE
Budget allocation: Argentina's PFMS; Chile's MCS; Colombia's SINERGIA; Costa Rica's SINE; Uruguay's SEV
Inducing management improvement: Argentina's PFMS and RBMS; Chile's MCS; Colombia's SINERGIA; Costa Rica's SINE; Uruguay's SEV
Accountability & transparency: Argentina's PFMS and RBMS; Chile's SSPG and MCS; Colombia's SINERGIA; Costa Rica's SINE; Uruguay's SEV

However, as will become clear in the subsequent sections of the paper, it is common for systems to emphasize some of their stated objectives over the others. In some cases, these differences in emphasis have changed through time. In general, the various degrees of attention that the systems have paid to their different objectives can be associated with the primary concerns of the broader reform initiatives of which they were part, and with the institutional and political environment in which they developed.

Thus, Argentina's PFMS was created in 1992 with the objectives of informing the budget allocation process, encouraging agencies' management improvement, and enhancing transparency and accountability. The way in which it was conceived and set up, however, has emphasized the budget allocation objective over the other two, which is arguably in line with the nature of the financial administration reform that brought it into being. From a more general perspective, both the creation of PFMS and the financial administration reform were part of the Menem Administration's "First Reform of the State" which, like all "first-generation" reform programs, was much more concerned with attaining macroeconomic equilibrium, deregulating the economy, and reducing the size of the public sector than with enhancing the government's policy-making and management capacity. On the other hand, the creation of SIEMPRO in 1995, also in Argentina, was part of a broader initiative intended to enhance the government's capacity to develop and implement effective policies ― in this particular case, in the domain of anti-poverty policies.
In the context of this reform, 3 SIEMPRO was entrusted the mission to support the design and fine-tuning of social programs. It is important to point out, though, that neither the emergence of SIEMPRO nor the broader initiative that inspired its creation were part of an across-the-board second-generation reform program comparable to the ones that gave birth to M&E systems elsewhere. After a frustrated attempt in the latter half of the 1990s, such a government-wide reform program was launched in Argentina in 2000 and was effectively implemented.5 Other M&E systems, like Chile’s SSPG, Colombia’s SINERGIA, and Costa Rica’s SINE, emerged in the context of reform initiatives that were especially concerned with reinforcing the government’s capacity to undertake effective national planning and to align government policies and national strategic priorities. More specifically, Chile’s SSPG was first established as the so-called “Ministerial Targets” in 1990, with the objectives of assessing the ministries’ and agencies’ compliance with the President’s policy priorities and serving as an accountability tool. Its creation took place shortly after the first democratic government in nearly two decades took office, as part of an ambitious series of reforms intended to strengthen the public sector’s capacity to address society’s needs. Colombia’s SINERGIA and Costa Rica’s SINE were both created in 1994. When first launched, SINERGIA was entrusted with the five types of objective identified above, and SINE with all but the budget allocation one. However, in line with the broader reform initiatives that led to their creation, for several years they emphasized their national planning and accountability objectives over the others. For a relatively short period and due to relatively pragmatic reasons, SINERGIA placed emphasis on inducing public agency management improvement as well. But, eventually, it redirected its attention to some of its other objectives. As to its budget allocation objective, SINERGIA only started to address it at the beginning of this decade, and this appears to have been more the result of lack of coordination between the institutional unit that it reported to, and those units in charge of formulating the budget, than of a conscious choice.6 Another expression of this lack of coordination was the use of a budgetary classification that does not make an explicit connection between the government’s policy objectives and its budget allocations. SINE adopted the budget allocation objective in 2002, more or less at the same time as SINERGIA redirected its attention to it. In both cases, this development occurred in the context of new Administrations which took the adoption of results-based budgeting as one of their core policy objectives. Finally, the various M&E mechanisms that make up Chile’s MCS were created between 1995 and 2002; Uruguay’s SEV was created in 1995, and Argentina’s RBMS between 1999 and 2004. The three systems emerged in the context of public sector reforms that placed their greatest focus on improving the budget allocation process and modernizing the state’s management practices. In the case of Chile’s MCS, the system’s stated objectives are informing the budget allocation process, supporting program fine-tuning, encouraging organizational management improvements, and enhancing transparency and accountability. 
In the case of Uruguay's SEV and, at the time of its creation, Argentina's RBMS (known originally as the "Expenditure Quality Evaluation Program"), their stated objectives were the same as for MCS, except for the program fine-tuning one. In the last two or three years, RBMS appears to have dropped its objective of supporting the budget allocation process. This change occurred after the reform initiative that had inspired its creation faded, the Secretariat of Finance stopped participating in the system's development, and the Under-Secretariat of Public Management became its only institutional sponsor.

One last thing to note is that, in most of the cases, the development of the systems received financial and technical assistance from multilateral development agencies. This is likely to have affected the orientation the systems ended up taking, although it is not easy to ascertain in what ways. Thus, SINERGIA and the RBMS were supported by the World Bank; the PFMS and SEV drew on Inter-American Development Bank (IDB) assistance; SIEMPRO obtained funding from both sources; and SINE benefited from IDB, United Nations Development Program and World Bank assistance.

5 This was in the context of President De la Rúa's (1999-2001) National Modernization Plan.
6 In the case of SINERGIA, the planning, M&E and the (investment) budget formulation functions were all concentrated in the same ministry (i.e., the National Planning Department). However, there was reportedly poor coordination among the divisions responsible for each of these functions.

4. THE SYSTEMS' IMPLEMENTATION AND SUBSEQUENT DEVELOPMENT

Except for Uruguay's SEV and Chile's SSPG, where the authorities decided to launch the system across-the-board all at once, the implementation of all the other initiatives followed a relatively gradual approach (Table 3). Typically, implementation began with a series of pilots in a small number of agencies, and was only extended to the remaining agencies and programs after the systems' methodologies and procedures had reached a certain level of maturity. Participation in the M&E systems was initially voluntary, and only became mandatory after a number of years. In all cases, the implementation of the M&E systems involved a significant capacity-building effort on the part of the coordinating unit. Such efforts included the provision of training activities, manuals and other support materials, technical assistance, and on-going support to the participating agencies' and programs' officials as well as to other stakeholders.

In some of the cases (e.g., Argentina's PFMS and SIEMPRO, and Uruguay's SEV), save for relatively minor adjustments that may have been needed along the way, the implementation process turned out to be relatively linear. In other words, the systems as they exist today resemble their original design quite closely. In other cases, though, the form that the systems ended up taking after several years had much less in common with the original plan.

Thus, in Chile, at the time of their creation, the various M&E initiatives that the government launched before 2000 were not part of a single and internally consistent plan. The first M&E mechanism to be created was SEGPRES's Ministerial Targets, in 1990. In 1994 DIPRES launched its Performance Indicators (PIs), and the so-called Modernization Agreements.
In 1997, DIPRES began undertaking desk reviews, known as Evaluations of Governmental Programs (EPGs); and in 1998 it replaced the Modernization Agreements with its Management Improvement Programs (PMGs) and merged the PIs into them. For the most part, each new M&E mechanism added to the functions that the preexisting ones were already fulfilling. But they were neither conceived nor managed as part of a system. The first effective move in that direction occurred after the Lagos Administration took office in 2000. That year, DIPRES's M&E mechanisms were merged under the newly established Management Control System (MCS), which was to be headed by a specifically created unit, known as the Management Control Division. From then on, three new M&E mechanisms were created. In 2001, the MCS established its Central Fund of Governmental Priorities (CFGP) and began undertaking impact evaluations, and in 2002 it conducted its first Comprehensive Spending Reviews (CSRs). In addition, several M&E mechanisms underwent different degrees of refinement. More specifically, in 2000 the Management Control Division redefined the PMGs and turned the PIs into a separate M&E mechanism; and, given the reduced availability of fiscal resources to finance new projects, in 2004 it replaced the CFGP with a simpler and less costly but analogous procedure that is based on the submission of Standardized Funding Requests for Programs' Innovation and Expansion (SFRPIEs). For their part, also in 2000, SEGPRES's Ministerial Targets were subject to several methodological improvements and became the Governmental Programming Monitoring System (SSPG). But, unlike the M&E mechanisms that had been created by DIPRES, they remained a separate system under SEGPRES's jurisdiction.

In the case of SINERGIA, the system's original design included an indicator-based monitoring scheme as well as a program evaluation component. However, the evaluation component did not become fully operational until 2002. In addition, the system was originally created to monitor and evaluate the implementation of the National Development Plan's strategic policies rather than as an organizational management support instrument. However, faced with the lack of the human and financial resources that they would have needed to create an external M&E system, the system's designers opted to base it on self-evaluations by entities. Their expectation was that the agencies' self-evaluations would provide them with the information they needed to produce the sector policy assessments that the system had been created to produce. Given the suitability of self-evaluation to support organizational strategic management, it did not take long for the system to adopt the encouragement of organizational management improvement as one of its core objectives as well. Eventually, for reasons that are not entirely clear, the system's coordinating unit began tightening its grip over the monitoring process and, in the early 2000s, stopped conducting organizational performance assessments and instead concentrated on program and sector policy assessment. The system has recently made considerable progress in establishing a clear connection between its performance assessments and the budget. This effort is reflected in the presentation of the national investment budget bill on a results-oriented basis.

In Costa Rica, SINE's original design included both an external monitoring component (known as the "Strategic Evaluation" component) and a self-evaluation one.
However, the latter component was not implemented. In 2002, those two originally conceived components were merged into one that combines external monitoring and diagnostic self-assessment by entities. This new development occurred in the context of increasing coordination between the Ministry of Planning (which is the system's institutional sponsor), the Ministry of Finance and the Comptroller General's Office. The current arrangement allows them to centralize requests for the information they need from the evaluated agencies in one single instrument and procedure, and facilitates information sharing among the three. At least as importantly, cooperation among the three institutions has also made it possible to adopt a results-based budget classification that has finally allowed SINE to attain a much more direct connection to the budget process.

In Argentina, the RBMS was first known as the "Expenditure Quality Evaluation Program" and, initially, enjoyed significant political support from the Vice-President's office. At the time, the development of the system was based on a joint effort among the National Secretariat of Modernization (which was created in 2000 and reported directly to the Vice-President), the Ministry of Economy's Secretariat of Finance, and the Chief of Cabinet's Office. The system's original plan included three components: a "Program Agreement" component, which was meant to establish a clear link between the system's performance assessments and the budget cycle; and the "Management Results Commitments" and "Commitment-with-the-Citizen Charter" (CCC) components, both of which focused on improving organizational management. Had the joint effort among those three institutions continued, the system could have succeeded in attaining some degree of articulation with the National Budget Office's (ONP) PFMS.7 This, in turn, might have helped reduce the profound disconnect that exists among Argentina's M&E efforts. However, following the resignation of the Vice-President at the end of 2000, the National Secretariat of Modernization and the Expenditure Quality Evaluation Program were moved to the Chief of Cabinet's Office and, shortly afterwards, cooperation between the latter and the Secretariat of Finance came to an end. The Expenditure Quality Evaluation Program became the Chief of Cabinet Office's RBMS and, in 2003, following a period of deep political turmoil, the Program Agreement and the Management Results Commitment components were interrupted. Consequently, until 2004, all performance assessment activities revolved around the CCC program. In 2004, the RBMS began implementing its second monitoring instrument, known as the Management Information System (SIG). SIG is organized as an internet-based balanced scorecard monitoring scheme.

7 The ONP depends on the Ministry of Economy's Secretariat of Finance.
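By way of illustration, the sketch below shows one minimal way a balanced-scorecard monitoring record of this general kind might be organized, with indicators grouped under the four classic scorecard perspectives and tracked against pre-established targets. The perspectives, indicators and figures are hypothetical assumptions, not a description of SIG's actual design.

```python
# Minimal, hypothetical sketch of a balanced-scorecard monitoring record.
# The perspectives follow the classic balanced scorecard; the indicators,
# targets and figures are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    target: float
    actual: float
    higher_is_better: bool = True  # e.g., False for processing times or costs

    def target_met(self) -> bool:
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

scorecard = {
    "Citizen/client": [Indicator("Requests resolved on time (%)", 90.0, 84.0)],
    "Internal processes": [Indicator("Average processing time (days)", 10.0, 12.0, False)],
    "Learning and growth": [Indicator("Staff completing training (%)", 75.0, 80.0)],
    "Financial": [Indicator("Budget execution rate (%)", 95.0, 96.0)],
}

for perspective, indicators in scorecard.items():
    for ind in indicators:
        status = "met" if ind.target_met() else "not met"
        print(f"{perspective} | {ind.name}: {ind.actual} vs target {ind.target} ({status})")
```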
Table 3: M&E Systems' Implementation Timeline, 1990-2005
[The original table charts, year by year from 1990 to 2005, the period during which each system and component has been in operation. The systems and components covered are:]
Argentina: PFMS; SIEMPRO; RBMS (Program Agreements; Management Results Commitments; Commitment-with-the-Citizen Charters; Management Information System)
Chile: Ministerial Targets/SSPG; MCS (Performance Indicators; Modernization Agreements; PMGs; EPGs; Impact Evaluations; Comprehensive Spending Reviews; CFGP; SFRPIEs)
Colombia: SINERGIA (Results Monitoring; Impact Evaluations)
Costa Rica: SINE (Strategic Evaluation; Self-Evaluation, not implemented; PAOs-based monitoring)
Uruguay: SEV
References: Argentina: PFMS: Physical and Financial Monitoring System; SIEMPRO: System of Information, Monitoring and Evaluation of Social Programs; RBMS: Results-Based Management System; Chile: SSPG: Governmental Programming Monitoring System; MCS: Management Control System; PMGs: Management Improvement Programs; EPGs: Evaluation of Governmental Programs; CFGP: Central Fund of Governmental Priorities; SFRPIEs: Standardized Funding Requests for Programs' Innovation and Expansion; Colombia: SINERGIA: Public Management Results Evaluation System; Costa Rica: SINE: National Evaluation System; PAOs: Annual Operational Plans; Uruguay: SEV: Results-Based Management Evaluation System.

As partially reflected in the discussion above, one of the factors that most profoundly affected the systems' development in the five countries was the evolution of the political environment and of the government's level of commitment to the M&E systems over time. All of the M&E systems remained in operation despite the various changes in the political affiliation of the Administrations that were in office; however, the degree of political support that they enjoyed was far from uniform. Several systems ― like Argentina's three systems and Uruguay's SEV ― appear to have attained their maximum impetus at the earliest stages of their development. Colombia's SINERGIA enjoyed substantial political support right after its creation but, subsequently, deep political instability and relative neglect by the subsequent authorities reduced its momentum dramatically, until the Administration that came to office in 2002 directed its attention to its rejuvenation. In Costa Rica, the system's support has reportedly been constant since its creation. But even in Chile, where the MCS enjoyed probably the highest levels of governmental commitment, the system's level of support did undergo some fluctuations.

A second factor that is likely to have influenced the development of some of these systems is the various diagnostic studies of their operation that were undertaken at different points in time. Examples of these are: (a) the internal appraisal studies that Argentina's RBMS coordinating unit undertook of its own functioning and results; (b) the frequent ad-hoc studies and other analyses that Chile's MCS coordinating unit commissions or conducts itself, to assess the workings of its various components;8 and (c) the focus-group-based study and a World Bank review that Colombia's SINERGIA commissioned in the early years after its creation.

Finally, in addition to the frequent difficulty in obtaining the necessary level of political support following their creation, the development of the M&E systems encountered several other challenges, some of which they are still trying to overcome.

8 The World Bank was commissioned to undertake at least one of these studies.
One of the first challenges that many of these initiatives needed to address, very early on in the implementation process, stemmed from insufficient clarity about the missions, goals, and objectives of the agencies and programs that they were intended to evaluate. This made it extremely difficult to assess whether the evaluated agencies and programs were achieving their intended outcomes. Thus, the implementation of most of the M&E systems was preceded by different types of strategic planning process that not only brought clarity to each specific agency's and program's objectives, but also clarified how those objectives related to higher- and lower-level objectives (e.g., how an agency's mission and objectives are connected to both the objectives of its responsible ministry's sector policies and those of its programs). In the case of Uruguay, this process resulted in a redesign of many of the state's organizational structures.

Another prevalent problem was the limited receptiveness (if not open resistance) that the systems encountered in the agencies and programs that were to be monitored or evaluated. This unwillingness to cooperate was mainly caused by their apprehension towards the possible consequences of an unfavorable evaluation. This fear turned out to be less pronounced when the systems' practices involved an important degree of joint work between the coordination unit and the agencies, as in Argentina's RBMS.

A third common problem was the lack of baseline information. This problem disappeared progressively, as the subsequent performance assessment cycles began producing the kind of information that until then had been unavailable.

A fourth challenge originated in the usual difficulties in coordinating actions among different institutions. A number of systems have attempted to overcome this problem by establishing inter-ministerial committees, but these have not always proved successful. A good case in point can be found in the frustrating experience of Argentina's Social Cabinet in the mid-1990s. The work of the Cabinet was coordinated by the Secretariat of Social Development (SIEMPRO's running agency), which was neither as powerful as some of the other Cabinet members nor had strong enough support from the highest political authorities to facilitate the M&E system's implementation. By contrast, the experience of Chile's MCS inter-ministerial committees appears to be more successful: the powerful DIPRES exerts the leading role and the other members of the committee help in ensuring an appropriate degree of articulation with other government institutions.

The fifth and last problem noted here was the limited involvement of line agency senior officials, which has usually resulted in poor organizational awareness of the systems' objectives and practices. Some of the systems, such as Colombia's SINERGIA and Argentina's RBMS, are addressing this problem by requiring the direct participation of agencies' senior officials in the negotiations that open each monitoring cycle. Once agreement is reached, the technical staff of both parties are able to prepare measurable targets. In the case of Chile's MCS, what ended up attracting the attention of agencies' senior officials towards the M&E system's requirements and activities was the weight of DIPRES's committed sponsorship, and the institutional and material incentives that accompany participation and compliance with the system.
5. THE SYSTEMS' LEGAL FRAMEWORKS

The M&E systems and their different components were created through a variety of legal instruments (Table 4). Chile's Performance Indicators and CFGP, and Uruguay's SEV, were introduced through national budget laws. Argentina's PFMS and Chile's MCS Management Improvement Programs (PMGs) were created through other laws, while Costa Rica's SINE was established through a series of executive decrees. In the case of Colombia, the mandate to establish a national evaluation system was included in the Constitution of 1991. To implement this mandate, the Colombian government resorted to a combination of laws, a decree, and a ministerial resolution. The other M&E system created through a combination of laws and decrees was Argentina's RBMS. The remaining M&E instruments were established through agreed protocols between the Executive and Congress (Chile's MCS Evaluations of Governmental Programs, Impact Evaluations, and Comprehensive Spending Reviews) or a ministerial resolution (Argentina's SIEMPRO).

The implementation of some systems involved an explicit decision to rely, first, on relatively more malleable legal instruments (decrees; protocols of agreement between the Executive and Congress; and some mechanisms sanctioned on an annual basis through the national budget law). In some countries, the legal basis of the M&E systems and their instruments was left until later in the process, after their methodologies and procedures had attained some minimum level of maturity. For example, both Costa Rica's SINE and the three evaluation components of Chile's MCS were not turned into law until 2003.

In general, the systems' legal frameworks define the systems' objectives and functions, and the responsibilities of the various parties involved in the process (e.g., the institution in charge of creating and/or running the system, the roles of the evaluated agencies and programs, etc.) and, in most cases, they also indicate the types of M&E activity to be conducted. However, they all leave the responsibility for defining the systems' specific procedures and methodologies to their central coordinating units.

Table 4: The M&E Systems' Legal Frameworks

Argentina, PFMS: Financial Administration Law (1992): mandated the creation of the system, and defined its coordinating unit's functions and responsibilities.

Argentina, SIEMPRO: Secretariat of Social Development Resolution No.2.851 (1995): established the system, assigned responsibilities among the different stakeholders, and indicated the types of M&E activities to be conducted.

Argentina, RBMS: Law 25.152 (1999): created the Expenditure Quality Evaluation Program, instituted Program Agreements, and authorized the Chief of Cabinet to sign them. Decree 229 (2000): created the CCC program, and defined its objectives and basic procedures. Decree 103 (2001): empowered the Chief of Cabinet to reward agencies that successfully meet their Program Agreement targets with different types of incentive. Decree 992 (2001): established that Management Results Commitment targets should be in line with the ones that feed into the National Budget Office's PFMS.

Chile, SSPG: Law 18.993 (1990): created the SEGPRES Ministry and entrusted the Division of Inter-ministerial Coordination (DCI) with the function of monitoring the implementation of the government's programmatic plan. Decree 7 (1991): specified DCI's functions further, and mentioned the Ministerial Targets for the first time.

Chile, MCS: Budget Law of 1995 and subsequent ones: mandated the use of PIs; an attachment to these laws includes specific rules. Agreement Protocols between the Ministry of Finance and Congress (1997, 2001 and 2002): introduced the EPGs, IEs, and CSRs, respectively, and provided a brief description of each; from then on, each year's Agreement Protocols have defined which programs and agencies will be evaluated. Law 19.553 (1998): created the PMG, mandated agencies to define annual targets, and introduced monetary incentives. Law 19.618 (1999): completed the PMGs' legal framework. Budget Law of 2001: introduced the CFGP and, like the subsequent Budget Laws, regulated the Fund's functioning. Law 19.896 (2003): made the undertaking of evaluations mandatory, and defined the Ministry of Finance's role.

Colombia, SINERGIA: Reformed Constitution of 1991: mandated the National Planning Department (DNP) to set up an evaluation system to assess the public sector's management and results. Resolution No.63 (1994): created SINERGIA, and defined its basic features, including its organizational structure and basic procedures. Law 152 (1994): established the National Development Plans' preparation, approval, execution, monitoring and evaluation procedures, and sanctioned the DNP's responsibilities for developing SINERGIA. Law 819 (2003): established that the national budget has to include details on programs' objectives, intended results, and management indicators. Decree 195 (2004): redefined the DNP's organizational structure and the functions of the Public Policy Evaluation Bureau (DEPP), which manages SINERGIA.

Costa Rica, SINE: National Planning Law No.5.525 (1974): assigned the Ministry of Planning (MIDEPLAN) responsibility for the evaluation of the Nation's economic and social development policies. Decrees 23.720-PLAN (1994) and 24.175-PLAN (1995): defined SINE's objectives, structure, and basic procedures; mandated the incorporation of M&E into the budget cycle; and set a 12-month period for all agencies to establish self-evaluation mechanisms. Art. 11 of the National Constitution (modified in 2000): instituted outcome evaluation and accountability as fundamental principles of the Costa Rican democracy. Financial Management and Public Budgeting Law (2001): changed the budget classification, and mandated greater coordination among MIDEPLAN, the Ministry of Finance and the Comptroller General's Office. Decrees 31165-H-PLAN (2003) and 31780-H-PLAN (2004): provided methodological and technical guidelines for the formulation of the Annual Operational Plans.

Uruguay, SEV: Reformed Constitution of 1967: created the OPP, introduced program budgeting, and mandated the Executive to submit Accountability Reports and Budget Execution Reports to Congress. Decree 104 (1968): entrusted OPP with the evaluation of public agencies' compliance with their objectives and budgetary targets. Decree 140 (1995): created CEPRE, which was put in charge of conceptualizing and designing the SEV. Decree 255 (1995): established that the budget cycle must be clearly linked to the programs' intended results, and mandated the introduction of results-based management. National Budget Law of 1995-1999: mandated CEPRE to set up a budgetary evaluation system, and agencies to provide OPP with the information that it requires. Budget Law of 2001: introduced Institutional Commitments.
6. M&E SYSTEM ARCHITECTURE

6.1 The Systems' Components and M&E Activities

The systems' approaches to M&E are based on a combination of monitoring and evaluation, or on monitoring alone (Table 5).
Monitoring consists of the periodic or continuous assessment of performance based on selected indicators. Evaluation, on the other hand, relies on a wider variety of methods to examine the evaluated programs or activities more closely, gain a better understanding of their nuances, and produce sounder assessments of their consequences (Rossi and Freeman, 1993). Given the relatively low costs that it entails, monitoring can measure the performance of programs frequently, and for a large number of programs at the same time. However, it is unable to provide enough elements to understand the complexity of the processes involved or to distinguish the evaluated program's effects from those of external factors. Program evaluation, in turn, is best equipped to establish the latter but, given the extended time and high costs that it involves, it can only be undertaken on a small number of programs at a time. The cost and duration of an evaluation will depend on its level of depth, rigor and comprehensiveness. But in any case, the level of coverage, promptness and economy of monitoring are always greater than those achieved by evaluation. Since their respective strengths and weaknesses make them complementary to each other, these two approaches become most effective when combined (Rossi and Freeman, 1993).

Of the eight M&E systems analyzed here, only three rely on both monitoring and evaluation activities: Argentina's SIEMPRO, Chile's MCS, and Colombia's SINERGIA. The other five systems all base their assessments on performance monitoring alone.

Table 5: The Systems' M&E Activities
Indicator-based monitoring: all eight systems (Argentina's PFMS, SIEMPRO and RBMS; Chile's SSPG and MCS; Colombia's SINERGIA; Costa Rica's SINE; Uruguay's SEV)
Program, policy or institutional evaluation: Argentina's SIEMPRO; Chile's MCS; Colombia's SINERGIA

The Systems' Monitoring Activities

The systems' monitoring schemes rely on a variety of indicators that track agency or program compliance with pre-established targets. In most cases, what these indicators intend to measure are efficiency, effectiveness, economy, and/or service quality. For that purpose, they include a series of physical and financial input, unit cost, output, coverage, and outcome indicators. Given the relative difficulty of measuring outcomes, all the systems tend to over-rely on input, process and output indicators. Some of the systems (especially Chile's MCS) have been slowly advancing towards a greater inclusion of intermediate and final outcome indicators. In the specific cases of Chile's MCS Management Improvement Program component and Argentina's RBMS Commitment-with-the-Citizen-Charter program, what the monitoring schemes are oriented to assess is the extent of progress that agencies have made in implementing a highly structured agenda of organizational process improvements.
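By way of illustration, the following sketch computes a unit-cost indicator and a coverage indicator for a hypothetical program and compares the latter against a pre-established target. The program and all figures are invented and are not drawn from any of the eight systems.

```python
# Illustrative indicator arithmetic for a hypothetical program; all figures
# are invented and do not come from any of the systems discussed here.

budget_spent = 4_500_000.0    # financial input (currency units)
rations_delivered = 900_000   # physical output
beneficiaries = 150_000       # people actually reached
target_population = 200_000   # people the program intends to reach

unit_cost = budget_spent / rations_delivered   # economy indicator: 5.00 per ration
coverage = beneficiaries / target_population   # coverage indicator: 0.75

coverage_target = 0.80  # pre-established target agreed with the coordinating unit
print(f"Unit cost per ration: {unit_cost:.2f}")
print(f"Coverage: {coverage:.0%} (target: {coverage_target:.0%})")
print("Coverage target met" if coverage >= coverage_target else "Coverage target not met")
```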
Table 6: Levels of Performance Assessed by the Systems’ Monitoring Components
- Sector-policy: Chile’s SSPG; Colombia’s SINERGIA.
- Organizational: Argentina’s RBMS; Chile’s SSPG and MCS; Costa Rica’s SINE; Uruguay’s SEV.
- Program: Argentina’s PFMS and SIEMPRO; Colombia’s SINERGIA; Costa Rica’s SINE.

Except for Argentina’s RBMS monitoring system, which is still at an early stage of implementation, all the monitoring systems have reached relatively high, if not total, coverage. But there are two important challenges that, to a greater or lesser extent, all these monitoring schemes still face. The first of these challenges is ensuring that the indicators cover all the core activities of the evaluated agencies and programs. The problem in this regard is that the performance of some activities is much easier to measure than that of others and, therefore, the activities that are most difficult to assess tend to be neglected. In addition, the intent to be thorough in this regard usually conflicts with the need to keep the number of indicators manageable. The second important challenge is improving the quality of the indicators used ― such as their relevance, measurability, and timeliness.

The Systems’ Evaluation Activities

The three M&E systems with program evaluation components conduct evaluations of the following types: ex-ante appraisals (e.g., cost-benefit and cost-effectiveness analyses), desk reviews, program implementation evaluations and impact evaluations. The systems’ impact evaluation studies usually rely on quasi-experimental designs and sophisticated statistical analysis techniques. In addition, Argentina’s SIEMPRO and Colombia’s SINERGIA complement their M&E activities with periodic diagnostic surveys and studies. In all cases, evaluations are commissioned from external consultants or institutions, which are selected through public bidding based on terms of reference defined by the systems’ coordinating units.

SIEMPRO’s evaluations concentrate on program-level performance. The MCS has three evaluation components, two of which are focused on programs, while the third assesses institutional design and performance. Finally, SINERGIA undertakes evaluations of both programs and sector policies.

Table 7: Levels of Performance Assessed by the Systems’ Evaluation Components
- Sector-policy: SINERGIA (Colombia).
- Organizational: MCS (Chile).
- Program: SIEMPRO (Argentina), MCS (Chile), and SINERGIA (Colombia).

In the case of SINERGIA and MCS, the programs, policies or agencies to be evaluated are generally selected on the basis of one or more of the following criteria: the amount of public resources involved; the size and characteristics of the policy’s or program’s target population; the program’s relative importance for a specific sector policy; and the possibility of replicating the program in a different context or enlarging its scale. In addition, in the specific case of the MCS, another important criterion is the agencies’ performance as measured by their Performance Indicators and desk reviews (i.e., Evaluations of Governmental Programs). In the case of SIEMPRO, the selection criteria are not explicitly defined.

According to its own estimates, Chile’s MCS has evaluated approximately 61 percent of what it defines as “evaluable expenditure”. Colombia’s SINERGIA, which began undertaking evaluations more recently, has evaluated 18 percent of the national investment budget, and expects to raise this percentage to 20 or 25 percent in coming years.
Finally, in the case of Argentina’s SIEMPRO, the last available estimate dates from 1999 and represented, at that time, 9 percent of the total budget of the ministries that run the evaluated programs. This percentage is most likely to have increased since then.

Coordination Between M&E Activities

One last important issue to consider is the extent to which the systems that conduct several M&E activities coordinate them with each other. In this regard, Chile’s MCS seems to be the system that is dealing with this most effectively. Its various components (i.e., Performance Indicators, the Management Improvement Program monitoring scheme, desk reviews, impact and institutional evaluations, and ex-ante appraisals) have been conceived explicitly with the intent to address different information needs and to complement each other. Moreover, DIPRES’s assessments of agency performance ― as measured through the system’s two monitoring components together with the findings of the desk reviews ― are stated to be among the factors which DIPRES considers when deciding what programs it will evaluate in-depth in the subsequent year. The available evidence appears to suggest that the various M&E components of the MCS system are increasingly being used in this complementary manner. In the case of Argentina’s SIEMPRO, such complementarity among M&E activities has not been evident until recently. In Colombia, where the evaluation component was only recently implemented, it is too early to make such an assessment.

The intent to coordinate the different M&E activities becomes most challenging when such efforts are championed and administered by different institutional actors. Among the five countries, only Argentina and Chile present this situation. Interestingly, there is a stark contrast between the experience of these two countries in terms of coordination. On the one hand, in Chile, the M&E functions are organized around two systems: the MCS, which was created and is run by DIPRES, and the SSPG, which was established and remains administered by SEGPRES. As noted above, at the time of their creation, the three M&E mechanisms that were merged into the MCS in 2000 emerged as separate DIPRES initiatives.9 They focused on fairly distinct aspects of agency or program performance, but they were not managed in a coordinated way. Nevertheless, at the beginning of this decade DIPRES decided to turn those three mechanisms into a system, and to then add a further three components to this system.

Coordination between DIPRES’s MCS and SEGPRES’s SSPG has also been growing in recent years. Thus, to define the indicators and targets that it uses to track ministry and agency performance, the SSPG takes the MCS scheme of institutional goals, objectives and products ― known as “Strategic Definitions” ― as a basis.10 Similarly, the SSPG relies on DIPRES’s Comprehensive Management Reports as a primary channel for the public dissemination of its findings. Arguably, one of the factors that may have contributed most to this increasingly harmonized approach is the high level of commitment that the influential DIPRES authorities have invested in these reforms. More generally, all the initiatives that make up DIPRES’s MCS and the SSPG alike appear to be part of a common vision in which empirically-based decision-making is regarded as a desirable practice.

In contrast, Argentina’s government-wide M&E activities are concentrated in three systems which function in a totally independent way from each other.
As already noted, the three systems are: the PFMS, which depends on the Secretariat of Finance’s ONP; SIEMPRO, which was originally created by the then Secretariat of Social Development and now reports to the National Council for the Coordination of Social Policies; and the RBMS, which was initially developed through a joint effort among the then National Secretariat of Modernization, the Chief of Cabinet’s Office, and the Secretariat of Finance and, since 2001, has been managed by the Chief of Cabinet’s Office alone.

The lack of coordination among the three systems is reflected in at least two ways. First, the PFMS and SIEMPRO, both of which assess performance at the program level, rely on different operational definitions of what ‘programs’ comprise, which makes it very difficult to combine the information that each of them produces. Secondly, there has been no systematic attempt, either on the part of the programs’ authorities or the evaluators, to link those programs’ objectives with the organizational goals and objectives that the RBMS has helped identify for some of the agencies responsible for these programs. This high level of disconnect between the three systems keeps their transaction costs higher than necessary and undercuts the potential benefits. It does this by requiring the evaluated ministries and programs to respond to multiple information requests, thereby imposing an excessive burden on them which, eventually, is most likely to conspire against the quality of the information they provide and the likelihood that they will end up using it.

9 The three mechanisms are Performance Indicators, the Evaluation of Governmental Programs, and the Management Improvement Program.
10 The so-called “Strategic Definitions” are formal statements through which agencies set forth their mission, strategic objectives, relevant outcomes, and beneficiaries, clients, and users, and specify the way in which they relate to their ministries’ strategic sector policy objectives.

Arguably, the profound fragmentation that prevails among Argentina’s M&E efforts is associated with the fact that they developed under very different conditions from those in Chile. In Argentina, efforts to enhance the institutional capacity and management practices of the public sector, and to enrich policy and decision-making by ensuring that M&E and other empirical information are available, have not been given the same priority as they have in Chile. In addition, in a political context where, unless an initiative is championed or at least openly blessed by the President, turf battles and inter-ministry rivalries usually outweigh the initiative’s merits, the fact that the three systems had different institutional sponsors is most likely to have been a serious drawback. The effects of all these factors may have been exacerbated by the various periods of political instability and the consequently high turnover of the senior officials who conceived or championed some of these initiatives.

6.2 Organizational Framework and Distribution of Roles

In order to be effective, M&E systems need to be organized in a manner that ensures both the relevance and the credibility of the information they produce. A sound way to ensure the relevance of the systems’ M&E assessments is to involve their expected users in the definition of what policies, programs or aspects of performance are to be evaluated (Mokate, 2000).
On the other hand, to attain an acceptable level of credibility, it is usually desirable to maintain some substantive level of independence between those who control or manage an M&E system and those who have a direct stake in the evaluated programs. This second condition is especially important when the information’s main expected users are external to the evaluated policy, program or agency. When M&E findings are primarily targeted at the agents responsible for the evaluated processes themselves, ensuring a high level of involvement and receptiveness on their part becomes more important than the information’s external credibility (Ala-Harja and Helgason, 1999).

One of the greatest challenges that M&E system designers face originates in the fact that these systems are usually created with the objective of addressing the information needs of a variety of stakeholders. And, as just noted, the conditions that need to be met to ensure the information’s relevance and credibility tend to be relatively specific to each type of user. For example, when a system’s M&E activities are meant to address the information needs both of line ministries or agencies and of one or more central ministries, there are at least two alternatives. On the one hand, the control of the M&E processes can be entrusted to a central ministry, in which case the information’s credibility may be ensured for all the stakeholders but its relevance, usefulness and acceptability to line ministries and agencies may be rather limited. On the other hand, the higher the level of control that line ministries or agencies exert over the processes, the more likely it is that the information produced is relevant to their needs; but, given the direct stake that they have in the activities being evaluated, the M&E findings’ credibility may suffer. In short, there is an underlying tension between the conditions required to ensure appropriate levels of information relevance and credibility to different stakeholders (Zaltsman, 2006). As the discussion below will show, there are different ways to address and reconcile these potentially conflicting requirements, but they usually involve significant trade-offs.

The eight M&E systems examined in this report have been set up and remain under the control of Executive Branch institutions. In the case of Argentina’s PFMS, Chile’s MCS, and Uruguay’s SEV, the system coordinating units report to institutions that are directly responsible for the budget formulation process (such as Argentina’s ONP and Chile’s DIPRES) or, at the very least, play an important role in it (such as Uruguay’s OPP).11 In Costa Rica, SINE depends on the ministry that is in charge of national planning (i.e., the Ministry of Planning, or MIDEPLAN), while in Colombia the institution that controls SINERGIA (the National Planning Department, or DNP) is responsible for both national planning and the formulation of the national investment budget.12 The coordinating unit of Argentina’s SIEMPRO reports to an inter-institutional commission (the National Council for the Coordination of Social Policies) made up of all the ministries that run anti-poverty programs. Finally, in the case of Argentina’s RBMS and Chile’s SSPG, the system coordinating units report to central government institutions with inter-ministerial coordination functions (the Chief of Cabinet’s Office and SEGPRES, respectively).
Although, based on their stated objectives, all the systems are expected to help hold governments accountable, Costa Rica’s SINE is the only one where a supreme audit institution (SAI) that is independent of the Executive Branch participates in the definition of the M&E agenda (see below). In Colombia, SINERGIA’s authorities have plans to engage civil society organizations in the analysis and dissemination of the information that the system produces, which may result in the system becoming subject to social control.

Since the operation of the monitoring and evaluation components involves fairly distinct steps and processes, the rest of this section will treat them separately.

The Systems’ Monitoring Activities

As Table 8 shows, the roles that the systems’ monitoring activities assign to their most immediate stakeholders appear to follow some common patterns. More specifically, except for Costa Rica’s SINE, where this function is overseen by an inter-ministerial committee, the definition of the basic elements of the M&E agenda always lies with the institution that is ultimately responsible for each system. The development of the systems’ methodologies is always in the hands of their central coordinating units. The definition of the indicators and targets on which the systems base their assessments involves, to a greater or lesser extent, the participation of both the assessed programs and institutions and the central coordinating units, and the information that feeds into each system is always provided by the assessed programs and institutions themselves. Finally, save for Argentina’s RBMS, where the participating agencies play a much more leading role in this regard, it is always the coordinating unit which is in charge of issuing the final performance assessments.

The greatest differences across systems revolve around the level of leadership and control that they assign to their coordinating units and to the assessed programs and institutions in the definition of the indicators and targets. On the one hand, some systems appear to be more concerned with ensuring the standardization and impartiality of the process and, therefore, assign the coordinating unit a much more decisive role in this regard. Chile’s MCS is a good case in point, as it is probably the system where the relationship between the coordinating unit and the assessed agencies follows the most vertical approach. More specifically, in addition to defining the overall performance monitoring agenda, the system’s coordinating unit exerts a closer oversight role throughout the entire process than in any of the other M&E systems.

11 In Uruguay, the responsibility for formulating the National Budget lies with the General Accounting Office. But the process also involves the OPP’s active participation at different stages. In this regard, one of the OPP’s core functions is to assist the General Accounting Office in the analysis of agency budget requests and, when necessary, in their adaptation to the broader government plan and resource availability.
12 One important thing to note, though, is that, for several years, the coordination between the division that runs SINERGIA and the unit responsible for developing the Investment Budget was extremely poor.
Table 8: Distribution of Roles Involved in Monitoring Activities
- Defines methodologies: CU in all eight systems.
- Provides training & TA: CU in all systems, except SINE (COM).
- Identifies issues to be assessed: PFMS: CU; SIEMPRO: CU; RBMS: CU & ACY (CCC), ACY (SIG); SSPG: CU & ACY; MCS: CU; SINERGIA: CU & MIN; SINE: COM; SEV: ACY.
- Proposes indicators: PFMS: PGM; SIEMPRO: PGM & ACY; RBMS: CU & ACY (CCC), ACY (SIG); SSPG: ACY; MCS: ACY (PI), CU & ACY (PMG); SINERGIA: MIN; SINE: ACY; SEV: ACY.
- Decides on indicators: PFMS: CU; SIEMPRO: CU & PGM; RBMS: CU & ACY (CCC), ACY (SIG); SSPG: CU & ACY; MCS: CU; SINERGIA: CU & MIN; SINE: CU; SEV: ACY.
- Proposes targets: PFMS: PGM; SIEMPRO: PGM; RBMS: ACY; SSPG: ACY; MCS: ACY; SINERGIA: MIN; SINE: ACY; SEV: ACY.
- Decides on targets: PFMS: PGM; SIEMPRO: PGM; RBMS: CU & ACY (CCC), ACY (SIG); SSPG: CU & Oth; MCS: CU (PI), CU & ACY (PMG); SINERGIA: CU & MIN; SINE: CU; SEV: ACY.
- Provides the data: PFMS: PGM; SIEMPRO: PGM & ACY; RBMS: ACY; SSPG: ACY; MCS: ACY; SINERGIA: MIN; SINE: ACY; SEV: ACY.
- Audits data: PFMS: ---; SIEMPRO: CU (…); RBMS: CU (CCC); SSPG: n/a; MCS: CU; SINERGIA: CU (…); SINE: ---; SEV: CU (…).
- Analyzes data and prepares reports: PFMS: CU; SIEMPRO: CU; RBMS: CU & ACY (CCC), ACY (SIG); SSPG: CU; MCS: CU; SINERGIA: CU; SINE: CU; SEV: CU.
- Negotiates actions to be taken: PFMS: ---; SIEMPRO: ---; RBMS: CU (CCC); SSPG: CU; MCS: CU (PMG); SINERGIA: ---; SINE: ---; SEV: ---.

Legend: CU = system’s coordinating unit; PGM = program; ACY = agency; MIN = ministry; PI = Performance Indicators component; PMG = Management Improvement Program; COM = inter-institutional committee; Oth = other institutions; (…) = conducted in a very rudimentary manner; n/a = not available.

On the other hand, other systems seem to give higher priority to the sense of ownership of, and receptiveness to, the M&E findings by senior officials in the assessed programs and agencies than to the external credibility of the information produced. Therefore, they give these officials a greater level of involvement in this part of the process. Among the eight systems discussed in this paper, Argentina’s RBMS is the one that ensures the greatest involvement of the line agencies. In effect, one of its monitoring components leaves the definition of the aspects of performance to be assessed entirely up to the evaluated agencies while, in the other, except for the definition of the basic methodologies and the verification of the data (which are both conducted by the coordinating unit) and the proposal of performance targets (which is up to the agencies themselves to make), all the other steps of the process are undertaken as a joint effort between the line agency and the coordinating unit’s experts. This also includes the assessment of agency performance and the preparation of the final monitoring reports, and may also include the joint preparation of an action plan.

In the rest of the M&E systems, the distribution of roles between the coordinating unit and the assessed programs and agencies appears to lie somewhere in between the tight external oversight characteristic of Chile’s MCS and the more relaxed and horizontal relationship that exists between Argentina’s RBMS coordinating units and the participating agencies. In these six cases, it is the assessed programs and agencies which propose both the indicators and the targets. However, in Chile’s SSPG, Colombia’s SINERGIA, and Uruguay’s SEV the programs and agencies appear to play a greater role in deciding which indicators will be used than in the other systems. On the other hand, in Chile’s SSPG, Colombia’s SINERGIA and Costa Rica’s SINE, the coordinating units seem to have a greater level of involvement in the definition of the programs’ and agencies’ performance targets than in the other three systems.

The available information on receptiveness to and utilization of these systems’ monitoring findings is rather scarce and, for the most part, merely anecdotal.
Nevertheless, it is worth noting that the system where the use of monitoring findings is most clearly documented is Chile’s MCS – that is, the system where the coordinating unit exerts the tightest control over the assessment process, and not one of those that concede greater leverage to the line agencies.

The most prevalent obstacle to the assimilation and eventual use of monitoring findings by line programs and agencies appears to originate in the lack of commitment and involvement on the part of their senior staff. For the most part, it is common for all the activities associated with the monitoring process to remain concentrated in the line agency organizational unit that acts as a liaison with the system’s coordinating unit. Consequently, the level of organizational awareness of the programs’ or institutions’ performance targets and assessments tends to be extremely low. This would seem to be true regardless of whether the coordinating unit’s counterpart at the agency was specifically created to deal with this task or already existed and performs other functions (e.g., planning, budgeting, etc.).

In an attempt to secure the commitment of the ministries and departments to the negotiated performance targets, some systems, like Argentina’s RBMS and Colombia’s SINERGIA, require that the process that results in the definition of those targets begin with high-level negotiations between the two parties. On the other hand, in Chile’s MCS, what appears to attract the high-level attention of programs and agencies to the assessment process and findings is the importance that the system’s powerful institutional sponsor (DIPRES) assigns to them, and a concern that those performance assessments may end up affecting their budget allocations.

The Systems’ Evaluation Activities

In the context of the evaluation components of the three systems that undertake this type of assessment, the distribution of roles among the different parties involved is relatively more uniform (Table 9). In all cases, the decision as to what policies, programs or organizations will be evaluated, and the type of evaluation approach to apply, are defined by the system’s sponsoring institution and/or some other entity independent of the programs or institutions to be evaluated. In the three systems, these critical decisions lie with more than one single actor. More specifically, in the case of Argentina’s SIEMPRO, the National Council for the Coordination of Social Policies (NCCSP) comprises all the ministries responsible for anti-poverty programs. In the case of Chile’s MCS, the decision is shared between DIPRES and Congress, whereas in Colombia, the Inter-Sector Evaluation Committee is made up of several central ministries.13 Except for SINERGIA, which engages the relevant line ministries in the definition of the type of evaluation to undertake, the evaluated institutions are completely excluded from these first two critical decisions.
Table 9: Distribution of Roles Involved in Evaluation Activities
- Selection of programs or agencies to be evaluated: SIEMPRO: National Council for the Coordination of Social Policies; MCS: DIPRES and Congress; SINERGIA: Inter-Sector Evaluation and Results-Based Management Committee.
- Definition of evaluation approach: SIEMPRO: coordinating unit; MCS: coordinating unit; SINERGIA: coordinating unit & relevant ministry.
- Financing of the evaluations: SIEMPRO: National Council for the Coordination of Social Policies; MCS: DIPRES; SINERGIA: evaluated programs, the ministries that run them & DNP.
- Undertaking of the evaluations: external evaluators in all three systems.
- Supervision of the evaluation process: SIEMPRO: coordinating unit; MCS: coordinating unit, inter-ministerial committee, and evaluated agencies (*); SINERGIA: coordinating unit.
- Negotiation of actions to be taken: SIEMPRO: n/a; MCS: coordinating unit & evaluated agencies; SINERGIA: n/a.

(*): The evaluated agencies’ role consists of providing feedback on the intermediate and final evaluation reports.
Reference: n/a: Not available.

Although it is up to the systems’ coordinating units to define the evaluations’ basic methodological approaches, the actual undertaking of the evaluations is always commissioned to external consultants or institutions that are selected through open and public bidding processes, so as to ensure the evaluations’ independence. SIEMPRO and MCS finance these evaluations with their own resources, while SINERGIA co-finances them with the evaluated programs and the ministries responsible for them.

SIEMPRO and SINERGIA entrust the supervision of the evaluation process to their coordinating units. In the case of MCS, the system’s coordinating unit oversees the process very closely but, in addition, there are two more actors involved. One of them is an inter-ministerial committee comprising the Presidency and the ministries of Finance and Planning, which is also in charge of ensuring that the evaluations’ development is consistent with the government’s policies, that the necessary technical support and coordination are available, and that the evaluations’ conclusions are passed on to the affected agencies. The second actor involved is the evaluated agencies and programs themselves, which (a) provide evaluators with the information that they need; (b) in the case of EPGs and CSRs, prepare the logframe that serves as a basis for the evaluation process; and (c) in all three types of evaluation alike, are given the possibility to react to both the intermediate and final evaluation reports. Finally, the MCS requires that, at the end of the evaluation process, the coordinating unit and the evaluated agencies engage in formal negotiations to define the specific ways and timeline within which the agency will implement the evaluation’s recommendations.

13 In fact, the Inter-Sectoral Committee also includes representatives from the ministries whose policies or programs are to be evaluated. However, those ministries only join after the decision on what policies or programs to evaluate has been made.

6.3 Information Flows and Reporting Arrangements

The systems have organized their information flows in a number of ways. To characterize their various arrangements, this section revolves around three issues. The first is the way in which each system has organized the different steps that precede the preparation of its M&E reports. Since the steps that these processes involve are specific to the type of M&E activity undertaken, this part of the discussion treats the monitoring and the evaluation processes separately.
The second issue concerns the reporting arrangements and overall dissemination strategy that the systems employ to ensure that the information they produce reaches their various stakeholders. The third issue is the incentives to use this M&E information, and the actual extent of utilization.

Process that Precedes Preparation of Monitoring Reports

The process that precedes the preparation of the systems’ monitoring reports can be conceptualized as consisting of four steps. The first step is the identification of the indicators and performance targets that serve as a basis for assessing policies, programs and agencies. Notwithstanding the differences in the relative level of control that the coordinating unit and the assessed institutions hold in this part of the process, it entails the first important exchange of information between these two parties, which prepares the ground for the subsequent stages. The second step concerns the dissemination of these performance targets. The third step involves obtaining the information that the system requires to produce its assessments, and the fourth, the procedures that the systems employ, if any, to ensure the quality and credibility of this information.

Submission of performance indicator and target proposals: Argentina’s PFMS, Chile’s MCS, Costa Rica’s SINE and Uruguay’s SEV require that the assessed agencies and programs submit their indicator and target proposals as part of (or attached to) their budget requests. In most systems, these proposals are generally submitted through standardized forms that, in the cases of SINE, SEV, and the RBMS’s SIG, also collect information on the institutions’ mission, goals, strategic objectives and operational plans. In the context of SINE, these forms also require an organizational diagnosis of the institutions’ strengths and weaknesses. In Argentina’s SIG, Chile’s two systems, and Uruguay’s SEV, the assessed agencies submit all this information electronically. SINE plans to adopt a similar information submission procedure shortly.

As noted above, in at least some of the systems (e.g., Argentina’s PFMS, Uruguay’s SEV, and possibly Argentina’s SIEMPRO as well), the identification of indicators and targets is generally undertaken with very little or no involvement on the part of the programs’ or agencies’ most senior officials, which most likely reduces their relevance to the operations of the agencies and programs. Colombia’s SINERGIA and Argentina’s RBMS are trying to avoid this problem by requiring that the standards that serve as a basis for the performance assessments be defined through top-level negotiations between the assessed institutions and the M&E system authorities. Naturally, for this type of requirement to be enforced, the M&E systems need to possess sufficient institutional clout, which is usually a function of the level of commitment and power of their institutional sponsor.

Dissemination of the performance targets: After the M&E system coordinating units review and approve these proposals, the agency and program performance targets are agreed and, in most cases, publicized. For the most part, the main means of public dissemination is the coordinating unit’s web site.

Provision of monitoring information: As already noted, the information that feeds the M&E system is provided, in all cases, by the line programs or institutions themselves.
In Chile’s two systems, Colombia’s SINERGIA, Uruguay’s SEV and, at least to some degree, Argentina’s RBMS, the information reaches the coordinating unit through intranet or internet systems. Costa Rica’s SINE is planning to reorganize this part of the process around a similar system shortly. In the case of Argentina’s PFMS, the information on compliance with physical output targets is delivered in the form of printed reports, while the information on compliance with financial targets is provided through an electronic intranet system.

Control of data quality and credibility: Data auditing is often far from systematic and, in some of the systems, is not even a regular practice. Chile’s MCS and Argentina’s CCC program conduct randomized quality checks. In the specific case of the MCS, when the agency or program concerned is considered to be of high public impact, these consistency checks extend to all the information that the system receives from it. In other systems, like Argentina’s SIEMPRO and Colombia’s SINERGIA, data quality controls are somewhat less methodical, while they are not currently conducted on a regular basis in Argentina’s PFMS, Costa Rica’s SINE, and Uruguay’s SEV.

Process that Precedes Preparation of Evaluation Reports

In the context of the M&E systems’ evaluation components, the information flow cycle is somewhat different. In addition to the information that they obtain from the evaluated agencies and programs, evaluation studies frequently rely on ad-hoc interviews, surveys and other sources to obtain the data they need.

In the specific case of the EPG and CSR components of Chile’s MCS, the evaluation cycle begins by asking the evaluated program or agency to provide some basic information following a standardized format. The EPG component requires programs to prepare their own logframe matrix. This matrix contains details on the program’s goal, the general and specific objectives of each component, the program’s main activities and performance indicators, and assumptions. As part of the CSR component, the evaluated agencies are required to prepare “preliminary evaluation matrices” containing details on: the government priorities that they intend to address; their mission, strategic objectives, and organizational structure; the strategic outputs and outcomes associated with each specific objective; etc. In both cases, the matrices are later assessed and, if necessary, adjusted by the evaluators, who use them as a basis for the entire evaluation process. In the case of impact evaluations ― the MCS system’s third evaluation component ― the information that evaluators require from the evaluated programs is more complex and cannot readily be summarized in a standardized format. This information is collected by a range of methods, depending on the nature of each evaluation. Even so, the information that the evaluated programs and agencies provide remains a fundamental input to the evaluation process.

Once the evaluators complete their studies, they submit a final report to the system’s coordinating unit. For Chile’s MCS, evaluators first submit a preliminary version of their evaluation reports to the coordinating unit and to the program or agency; the latter, in turn, reviews these reports closely and provides comments to the evaluators. Based on this feedback, the evaluators deliver a final report, which is also sent to the evaluated programs or agencies, to give them the opportunity to express their reaction.
These responses are ultimately added to the evaluation report in the form of a written statement.

Reporting Arrangements and Dissemination Strategies

The systems’ M&E findings are always conveyed through different types of reports. At least in some of the cases, the contents of these reports are tailored to the specific information needs of the intended reader. For example, Argentina’s PFMS produces quarterly reports on each of the evaluated programs to be submitted to the program managers, their agencies and ministries, the ONP’s authorities, and other divisions of the Secretariat of Finance. An annual report containing more abridged information on all the programs is submitted to the National Accounting Office. This information is in turn used as a basis for preparing the Investment Account report through which the Executive Branch reports to the Congress concerning its execution of government programs. In all cases, the documents are made publicly available. Similarly, Chile’s SSPG produces quarterly and annual reports for the President containing information on the entire government’s performance, and ministry- and agency-specific reports are prepared for ministry and agency heads. A summarized version of all this information is disseminated through the MCS’s Comprehensive Management Reports (see below).

In some of these systems, information on the compliance of evaluated institutions with their performance targets can be consulted through intranet and internet systems, which provide the various stakeholders with different levels of access. This is the case with Argentina’s RBMS SIG, Chile’s two systems, Colombia’s SINERGIA, and Uruguay’s SEV. The intranet system that Costa Rica’s SINE is planning to launch in the near future will also serve this function. The internet system that SINERGIA uses (known as SIGOB)14 gives citizens partial access. For the time being, access to the RBMS SIG is restricted to the evaluated agencies’ officials and to certain other officials, but there are plans to make it partially accessible to the general public.

Some of the systems have begun experimenting with reader-friendly report formats. This is being done in an attempt to overcome the difficulties that many of the intended information users (citizens, legislators, policy-makers, public managers) have had in understanding and making use of the original reports, which they found exceedingly lengthy and written in too technical a language. Thus, in the last two or three years, Colombia’s SINERGIA and Uruguay’s SEV have begun relying on different types of bulletins and booklets that are written in very plain language and make extensive use of graphs. Similarly, for several years now, Chile’s MCS has concentrated much of its M&E information in its Comprehensive Management Reports (BGIs), which are more reader-friendly than the system’s individual performance assessment reports. In addition, the MCS attaches executive summaries to all its final evaluation reports.

14 SIGOB stands for System of Presidential Targets’ Programming and Management.

Finally, in all five countries, the internet serves as the main channel of public dissemination. In addition, some of the systems also rely on other dissemination media. For example, in Argentina, the RBMS CCC program requires participating agencies to publicize their performance targets and assessments themselves, and it evaluates the agencies’ efforts in this regard as part of its agency assessments.
In Colombia, the President and the members of his cabinet take part in an annual TV program known as “telecast ministry councils”,15 and in weekly townhall meetings around the country, in the context of which they respond to citizens’ questions on the government’s policy results. Finally, in both Colombia and Costa Rica, M&E findings are also publicized through press conferences.

The Use of M&E Information

For the most part, the extent to which M&E findings are being used in all these M&E systems remains unclear. A study of the national M&E systems of Chile, Colombia, Costa Rica and Uruguay conducted between 2001 and 2002 (Cunill Grau and Ospina Bozzi, 2003) found that most of the systems’ stakeholders were making very limited use of the information. In 2004, a case study of Argentina’s three M&E systems reported similar findings (Zaltsman, 2004). However, since these studies were undertaken, there have been reports that, in some of these countries, M&E findings are beginning to influence decision-making. Sound evidence comes from Chile, where a World Bank (2005) review of the MCS evaluation components found ― as have the anonymous internet surveys conducted by DIPRES16 ― that most of the stakeholders consulted (DIPRES budget analysts and section heads; ministry and agency budget officials; program authorities; etc.) reported that the information was being used as an input for decision making.

Most of the systems have intended to foster the use of M&E information and performance improvement by establishing budgetary or institutional incentive mechanisms, but few have succeeded in operationalizing them. The system that has accomplished most in this regard is Chile’s MCS, which has set up a variety of incentives targeted both at the evaluated programs and agencies, and at the Ministry of Finance. These incentives include: (a) the introduction of the so-called “Institutional Commitments”, which are formal pledges through which evaluated agencies commit to implement the evaluation recommendations within a given timeline; (b) the regular monitoring of agencies’ compliance with these Institutional Commitments, as well as with their targets under the PI and PMG initiatives; (c) the explicit requirement for agencies and ministries to justify their budget requests with information on past and planned performance; and (d) the explicit requirement that M&E information be used as part of the internal and external discussions that take place during the budget formulation process.17 In this sense, at least part of the success of the MCS can be attributed to the committed support that it has received from the powerful DIPRES over many years.

Most of the other systems have not been able to achieve this level of support; where these other systems have succeeded in creating incentives, they have lacked the political leverage required to enforce them. This is the case of the Program Agreement component of Argentina’s RBMS, where a law of 1999 and a decree of 2001 enabled the Chief of Cabinet to use financial and institutional incentives to encourage good organizational performance. However, after the key officials who sponsored the creation of the system left the government, the entire initiative lost impetus, and the component’s incentives were never enforced. Moreover, within two years, the component itself ceased operation. A second example can be found in Uruguay’s SEV where, after a first failed attempt in 1995, the 2000-2004 Budget Law instituted a series of financial rewards for good institutional performance. However, these incentives never materialized because of fiscal constraints and insufficient political support.

In short, in most cases, the main incentive for assessed agencies and programs to pay attention to and make use of this information is the fact that their performance is now being measured and tracked, and the resulting assessments are circulated both within government and publicly. It can be argued, however, that the effectiveness of this type of incentive is highly sensitive to the degree of dissemination and the visibility of the systems’ performance assessments.

15 In Spanish, “Consejos Televisados de Ministros”.
16 See, for example, DIPRES (2004).
17 The internal discussions take place before DIPRES defines and communicates the annual budgetary baselines to ministries and agencies. These internal discussions involve the national budget director, DIPRES’s budget section heads, and the MCS coordinating unit officials. The discussions focus on the analysis of the financial and performance information available for each agency and program. The external discussions involve meetings between officials from DIPRES and from the agencies and ministries whose budgets are being determined. These bilateral meetings are held after ministries have received their budget baselines and have submitted their proposals to DIPRES.

6.4 Linkage Between M&E and Budgeting

Very frequently, the integration of M&E information into the budget decision-making process is hindered by the lack of an appropriate budget classification (Joyce and Sieg, 2000). That is typically the case when the budget is organized entirely around objects of expenditure and does not specify the objectives or intended outcomes that each budget allocation is meant to finance. But, as some of the cases below show, program budget classifications, in and of themselves, do not achieve an appropriate connection between the two types of information.

Argentina is one of the three countries included in this report with a program budget classification. However, the connection between M&E findings and budget allocations is still difficult to attain, for at least two reasons. First, it is rather common for the budget’s structure not to reflect the programs’ actual production processes accurately: many programs are included as subprograms, as activities of other programs, or completely merged under larger programs. Secondly, most M&E activities focus on federal programs, and the functioning of many of them involves the use of human and material resources that are financed by provincial and local governments. Since sub-national governments’ expenditures are not included in the national budget, the information on program expenditures that it contains is far from complete. In addition, agencies have a very short time-span within which to prepare their budget requests, which hinders the appropriate connection between their financial programming and their physical output plans. This, in turn, is exacerbated by the fact that coordination between the program authorities (who are in charge of developing physical output plans) and the budget divisions of the agencies that run the programs (who bear responsibility for the financial programming) is rather poor.

By contrast, in the other two countries with program-based budget classifications (Uruguay and Costa Rica), the linkage between the M&E system performance assessments and budget allocations is much clearer.
Before implementing SEV, Uruguay’s government redefined the public sector’s organizational structures so that each program would be ascribed to one single agency. This allows the budget to identify the expenditures associated with the attainment of the different program objectives without losing track of the organizational responsibilities for them. Moreover, since the beginning of the 2000s, agency budget requests have been required to specify the amount of resources that they plan to assign to the pursuit of each specific performance target, which facilitates the connection between the information that the SEV produces and the budget.

Costa Rica adopted a programmatic budget classification after 2001. Since then, cooperation between the ministry responsible for planning (MIDEPLAN) and the institutions in charge of formulating the budget (the Ministry of Finance and the Comptroller General’s Office) has enhanced the coordination between the two processes. As in Uruguay, agency budget requests take the form of strategic and operational plans that specify the amount of resources that they plan to assign to the pursuit of each goal and target. This allows budget decision-makers to weigh the alternative outputs that the financial resources they are to assign could produce. SINE’s M&E findings inform them about the extent to which the targets that agencies propose in their strategic and operational plans are being met in practice. For both Uruguay’s SEV and Costa Rica’s SINE, the greatest challenge facing the link between the M&E system and the national budget lies in ensuring that the indicators that serve as the basis for their monitoring schemes represent the assessed agency and program performance effectively, and that the cost estimates that they rely upon are sufficiently accurate.

Colombia’s national budget follows a line-item classification, which limits the potential for establishing a clear link between budget allocations and the M&E information that SINERGIA produces. As already noted, SINERGIA’s assessments focus on the performance of specific policies and programs. In 2004, the DNP submitted a reform bill to Congress proposing an adjustment of the Organic Budget Statute so that, in addition to the functional, economic and accounting classifications that it employs today, the national budget would adopt a program classification. This bill has not been approved by Congress, however. Nevertheless, the DNP has already begun moving in this direction. Since 2004, the DNP has prepared the national investment budget bill using two parallel budget classifications: the legally approved one, and a newly developed “results-based” one. Under the latter, most budget allocations have one or more performance indicators attached and, in all cases, they are linked to the pursuit of one or more of the National Development Plan’s strategic objectives. On the other hand, the current expenditure budget, which is prepared by the Ministry of Finance and represents a greater share of the national budget, is still formulated according to the traditional line-item classification.

Like Colombia’s, Chile’s budget is organized around a line-item classification by agency; this includes details on only some of the agencies’ program allocations.
A recent change in the budget classification has increased the number of programs identified in the budget, and the implementation of the integrated financial management system known as SIGFE18 will soon allow the intended outcomes and specific allocations to be linked much more clearly. But for the time being, the relationship between intended outcomes and budget allocations remains elusive. Nevertheless, the MCS M&E findings are better integrated into the budget process than those of any of the other systems examined in this paper. To some extent, this is facilitated by the fact that most of the performance information that the MCS produces follows the same level of aggregation as the budget. More specifically, PMGs, CSRs, and PIs concentrate on agency performance and, in the specific case of PIs, many of the performance assessments can also be linked to specific agency expenditure items. On the other hand, the programs that the system evaluates with its EPGs or IEs are generally not identified in the budget as such. Therefore, the DIPRES evaluators and the budget coordinating units take care to ensure that the budget estimates are carefully linked to the evaluation findings when the budget bill and budget law are prepared.

18 The acronym SIGFE stands for Information System for the State’s Financial Management.

Another factor that appears to be critical to the success of the MCS in integrating M&E information into the budget formulation process is the committed support that it receives from its powerful institutional champion, DIPRES.

In short, the experience of these five countries suggests that while, in principle, program-based budget classifications should be able to maximize the benefits of M&E information for budget decision-making purposes, simply having such a program classification does not produce performance-based budget decision-making. On the other hand, Chile’s experience demonstrates that, when the determination to integrate performance considerations into the budget process comes from the highest levels of government, this can be achieved even in the absence of program-based budgeting.

7. FINAL REMARKS AND LESSONS DRAWN

The similarities and contrasts that emerge from the comparative analysis of these eight government M&E systems suggest a number of valuable lessons, which are presented below.

Institutional Configuration of the M&E Function

In Colombia, Costa Rica, and Uruguay, the M&E function is organized around a single system which, at least in principle, seems to leave them in a good position to ensure consistency among the different processes that the function entails. On the other hand, in Chile and in Argentina, the function is currently configured around two and three different systems, respectively. The implications of this type of institutional arrangement are quite different for each of the two countries, though, which makes the comparison between their experiences especially revealing.

In Argentina, although the three systems focus on quite distinct (and therefore potentially complementary) aspects of public sector performance, they operate in a totally uncoordinated manner, and with nearly no points of connection with each other. This fragmented approach represents a lost opportunity: it has not been possible to use the information from the different systems in a complementary, synergistic manner.
Moreover, because these uncoordinated systems require ministries, agencies and programs to respond to multiple information requests, an unnecessary burden is imposed on them, which is most likely to conspire against the quality of the information they provide and the likelihood that they will end up using it.

On the other hand, the two systems that exist in Chile have been functioning in an increasingly congruent manner. This suggests that having the M&E function structured in more than one system is not, in and of itself, necessarily an impediment to its effective operation. What appears to have made the difference between Argentina and Chile is that, in the latter, the two initiatives are grounded in a higher-level overarching vision that enjoys the committed support of powerful institutional sponsors. In Argentina, the only time when two of the initiatives came close to being coordinated was during a brief period when the Vice-President’s Office championed cooperation between the institutional sponsors of these initiatives. Shortly after the Vice-President left office, however, that cooperation came to an end, and the development of the two initiatives ended up following different paths.

Approaches to M&E

The most prevalent type of performance assessment practice across the systems is indicator-based monitoring, which all eight systems conduct. Only Argentina’s SIEMPRO, Chile’s MCS, and Colombia’s SINERGIA also include evaluation components. This provides these three systems with a wider range of options than the other five to adjust the level of depth of their performance assessments to the specific type of information need that they are trying to address. Performance monitoring, in and of itself, represents a relatively crude way to inform decision-making. In many cases, there is a need for a much more nuanced, in-depth understanding of the processes involved in particular programs or policies, which evaluations are much better equipped to provide. Given the specific strengths and weaknesses of monitoring and evaluation, which are potentially complementary to each other, the ideal approach is one that relies on an appropriate balance between the two types of activity. Chile’s MCS provides a good example of how this can be done in practice. The system includes two performance monitoring components and three evaluation components, each of which is centered on different aspects of organizational and program performance. The monitoring information is used as one of the factors to consider when deciding on which agencies and programs the evaluations will focus. Moreover, one of the evaluation components relies on relatively short, less costly and less sophisticated studies that, besides providing valuable performance information, are taken as a basis to determine the possible need for larger-scale, rigorous impact evaluations.

The Relevance of M&E Information to M&E System Stakeholders

One of the best ways to ensure the relevance of M&E information to the needs of its intended users is to engage them in the definition of what policies, programs and aspects of performance will be subject to monitoring and evaluation. Moreover, the greater their level of involvement in that first and essential stage of the process, and in the subsequent ones, the higher their sense of ownership and their likely receptiveness to the M&E findings.
A challenge that M&E systems usually face is in achieving a high level of participation by all the stakeholders whose information needs the systems are meant to address. The way in which the systems have been dealing with this issue varies from one case to another but, in general, it has entailed significant trade-offs. For example, Colombia’s SINERGIA is intended to serve the information needs of: (a) the National Planning Department, to inform its national planning activities and the formulation of the investment budget; (b) line ministries, to support the design and management of programs; and (c) the President’s Office, Congress, audit institutions and society in general, to enhance transparency and accountability. To ensure the relevance of the system’s evaluations to its various stakeholders, the decision on what specific programs and policies to evaluate has been left in the hands of an inter-institutional committee that includes representatives from the Presidency, the National Planning Department, and the National Budget Bureau (in the Ministry of Finance). However, the committee leaves several stakeholders outside these critical decisions: it does not include representatives from Congress, audit institutions, or civil society organizations. In the case of the line ministries that are responsible for the programs to be evaluated, the committee assigns them a role in helping the DNP decide on the type of evaluation to be conducted. However, they participate neither in the selection of the programs that will be subject to evaluation nor in the subsequent stages of the process.

Argentina’s RBMS relies on a different approach. One of its components leaves the definition of the aspects of performance, indicators and targets to be monitored to the agencies themselves, whereas the second component demands a high level of participation from the two main intended users of the assessments that it produces: the Chief of Cabinet’s Office, represented by the system’s coordinating unit, and the evaluated agencies themselves. Thus, the monitoring cycle engages both parties in the definition of the performance aspects to be assessed, the identification of indicators, the assessment of the agencies’ performance, and the preparation of the final assessment reports. The expectation is that each of these steps of the process will be undertaken on a consensual basis. The system requires that the overall performance standards and targets which are set be agreed through high-level negotiations between the agencies and the Chief of Cabinet’s Office.

The Systems’ Impartiality and Credibility

It is widely considered necessary ― to ensure the credibility of M&E findings ― for the M&E system activities to be conducted with some degree of independence from the agencies and programs being evaluated. This is particularly important when the intended users of the M&E findings are external to the policy, agency or program being assessed. However, when M&E findings are primarily targeted towards the agents responsible for the evaluated activities, ensuring a high level of involvement and receptiveness on their part becomes more important than the information’s external credibility.

The systems examined have relied on several strategies to ensure the impartiality of the assessments they conduct. One strategy, used by all three evaluation systems, is to contract out the evaluations to external consultants or institutions selected through open and public bidding processes.
In the case of monitoring activities, all the systems have reserved at least some of the most sensitive steps of the process for their coordinating units which, in all cases, are independent from the agencies and programs whose performance is being assessed. These steps include the decision on which activities to assess and the definition of the systems' basic methodologies and, except for Argentina's RBMS, also the analysis of the data gathered and the final assessments. As noted above, in the RBMS CCC program, the last step of the monitoring cycle is conducted jointly by the system's coordinating unit and the assessed agencies, whereas in its SIG component it is left entirely to the agencies themselves.

Although the managers of all the systems acknowledge the importance of auditing the information they receive from the assessed agencies and programs, not all of them do so in a systematic way. At least three of them ― Argentina's PFMS, Costa Rica's SINE and Uruguay's SEV ― currently do not audit the information on a regular basis, while those where data quality controls are done most methodically ― i.e., Chile's MCS and Argentina's RBMS CCC program ― perform them on a random basis.

One of the stated objectives of most of these M&E systems is enhancing public sector transparency and accountability, yet control of all these systems lies with Executive Branch institutions. Moreover, except for Costa Rica's SINE, none of them assigns supreme audit institutions (SAIs) that are independent of the Executive Branch any kind of role in M&E processes. SINERGIA's coordinating unit, in Colombia, plans to engage civil society organizations in the analysis and dissemination of the system's findings; this has been conceived as another way to reinforce the system's credibility.

Reporting Arrangements and Dissemination Strategies

The existence and availability of M&E information does not guarantee that the intended users will actually use it. The systems examined in this paper have been trying to facilitate and encourage utilization in various ways. One approach consists of tailoring the reporting arrangements to the expected needs of each type of user. This includes the preparation of different types of report for different audiences. Some of these reports focus on each of the assessed programs or agencies separately, while others present, in a single document, a less detailed overview of all the performance assessments that have been conducted. Reports are also tailored to their intended users in terms of the frequency with which they are issued, the complexity of the language in which they are written, and their format.

It is equally important that key stakeholders are aware of the availability of this information and have easy access to it. For example, in many agencies there is a widespread lack of awareness of the organization's performance targets and assessments. This usually appears to be the result of poor internal communication, a problem often aggravated by the concentration of all of the agency's M&E functions in a single organizational unit, which may not communicate well with the agency's senior management. In order for M&E systems to serve as a public accountability and transparency instrument, it is important that the information the systems produce is publicly visible and easily accessible. The internet is used as a primary dissemination channel in the M&E systems considered in this paper.
The internet offers the advantage of facilitating access to information, but it is less effective as a means of making the public aware that this information exists. Several systems therefore rely on other dissemination strategies as well. For example, in addition to disseminating an abridged version of this information through its website, Argentina's RBMS CCC program requires participating agencies to publicize their performance targets and assessments themselves, and it assesses agency efforts in this regard as one of the dimensions considered in its broader agency appraisals. In Colombia, the President and the members of his cabinet take part in an annual TV program, and in weekly town hall meetings, in which they respond to citizens' questions on the government's policy results. Finally, in both Colombia and Costa Rica, M&E findings are also publicized through press conferences.

Most of the systems have tried to encourage the use of M&E information and achieve improvements in performance by establishing budgetary or institutional incentive mechanisms, but few have succeeded. In most cases, the main incentive for the different stakeholders to pay attention to and make use of this information is the fact that performance is now being measured and tracked, and the resulting assessments are circulated both within government and publicly. The system that has advanced furthest on this front is Chile's MCS, which has set up a variety of incentives. These include: the requirement that the agencies responsible for the evaluated programs make a formal commitment to implement the evaluation's recommendations; the close monitoring of agency compliance with these commitments and with the performance targets they have agreed; and the institutionalized use of M&E findings during budget negotiations and preparation. The contrast between the experience of Chile's MCS and that of some of the other M&E systems suggests that the committed support of a powerful institutional sponsor, as the MCS has from DIPRES, may be essential ― not only to design these incentives, but also to enforce them.

Linkage Between the M&E System and the Budget

One of the most common obstacles to integrating M&E findings into the budget process is the lack of correspondence between the intended outcomes of agencies and programs, and the budget classification (which is generally organized by agency and type of expenditure). One way to address this disconnect is to adopt a program- or objective-based budget classification, and some of the countries in this sample have done so. In contrast, Chile's budget is still largely organized around a line-item classification by agency. Nevertheless, the MCS M&E findings are arguably much better integrated into the budget process than those of the other systems examined in this paper. To some extent, this is facilitated by the fact that most of the performance information the MCS produces follows the same level of aggregation as the budget (i.e., the agency level). For the specific programs that are evaluated, the DIPRES evaluators and budget coordinating units link the evaluation findings to the budget estimates for individual agencies and activities. Another important factor has been the committed support that the MCS enjoys from its powerful institutional champion, DIPRES.
In addition to having consistently supported the system's development, the senior managers of DIPRES have clearly and consistently signaled their determination to incorporate M&E considerations into the preparation of the budget.

Implementation Strategies and Subsequent Developments

With the exception of Chile's SSPG and Uruguay's SEV, the implementation of all the M&E systems followed a relatively gradual approach. Implementation typically began with a series of pilots in a small number of agencies, and was only extended to the remaining agencies and programs after the M&E methodologies and procedures were judged to be sufficiently robust. Participation in the M&E systems was initially voluntary, and only became mandatory after a period of years. In all cases, implementation entailed an important capacity-building effort on the part of the M&E system's coordinating unit, including the provision of training, technical assistance, and other ongoing support.

In some cases, the systems as they exist today resemble their original design very closely. In other cases, however, the systems' final form had little in common with the original plan. Two factors may have been behind these unforeseen developments. One was the evolution of the systems' political environment, which usually entailed changes in the government's priorities and in the level of political support. The second was a growing understanding of which elements of the M&E system were working as intended, and which were not. In some of the systems (e.g., Chile's MCS, Colombia's SINERGIA, and Argentina's RBMS), this learning process was supported by periodic diagnostic studies and reviews, which were either commissioned from outside experts or conducted internally.

Legal Framework

The M&E systems and their various components were established through a range of legal instruments. In some cases, there was an explicit decision to rely at first on relatively more malleable legal instruments (decrees and protocols of agreement between the Executive and Congress; some mechanisms were sanctioned through the national budget law on an annual basis), leaving sanction by law for later in the process, after the systems' methodologies and procedures had attained a greater level of maturity. In general, the systems' legal frameworks define their objectives and functions, the responsibilities of the various parties involved in the process and, in most cases, also the types of M&E activities to be conducted. However, they all leave the definition of the systems' specific procedures and methodologies to the central institutions in charge of them.
Annex

List of People Interviewed or Consulted

Argentina

National Budget Office's Physical and Financial Monitoring System (PFMS)

Diana Boeykens: Director, Budget Evaluation Bureau, ONP. Personal interview, Aug 2004; e-mail exchange, June 2005 and Feb 2006.
Marcos Makón: Former under-secretary of Finance (at the time when the PFMS was created). Personal interview, Dec 2003 and Jan 2004.
Roberto Martirene: Former national budget director; current advisor to the ONP. Personal interview, Aug 2004 and Sept 2004.
Miguel Angel Bolivar: Former national budget director; current advisor to the ONP. Personal interview, Sept 2004.
Juan Pablo Becerra: Former consultant, PFMS, Budget Evaluation Bureau, ONP. Personal interview, Sept 2004; e-mail exchange, Sept 2005.

System of Information, Monitoring and Evaluation of Social Programs (SIEMPRO)

Beatriz Toutoundjian: General coordinator, SIEMPRO. E-mail exchange, June 2005.
Nerio Neirotti: Former evaluation and monitoring manager, SIEMPRO. Personal interview, Jan 2003.
Mabel Ariño: Senior researcher, SIEMPRO. Personal interview, Jan 2003.
Miriam Sebban: Member of staff, SIEMPRO. Telephone interview, Jan 2004.
Gabriel Martínez: Consultant, SIEMPRO. Personal interview, Sept 2004; e-mail exchange, Feb 2006.

Results-Based Management System (RBMS)

Marcos Makón: Former secretary of Modernization (at the time when the RBMS's components were launched). Personal interview, Dec 2003 and Jan 2004.
Carmen Sycz: Director, National Office of Public Management Innovation (division in charge of the RBMS), Under-Secretariat of Public Management, Chief of Cabinet's Office. Personal interview, Jan 2003 and Sept 2004; e-mail exchange, Sept 2005 and Feb 2006.
Eduardo Halliburton: General coordinator, CCC program. Personal interview, Dec 2003.

Chile

Management Control System (MCS)

Marcela Guzmán: Director, Management Control Division, DIPRES. Phone interviews, Dec 2004, Feb 2005 and Feb 2006; personal interview, June 2005.
Luna Israel: Member of staff, Management Control Division, DIPRES. E-mail exchange, Oct 2004 (*).

(*) Here and throughout this Annex, an asterisk indicates an interview or e-mail exchange conducted in the context of a project for the Latin American Center for Development Administration's (CLAD) Integrated and Analytical System of Information on State Reform, Management and Public Policies (SIARE) web site.

Monitoring System of Governmental Priorities (SSPG)

Francisco Morales: Member of staff, Division of Inter-ministerial Coordination, SEGPRES. E-mail exchange, Oct 2004 (*).

Colombia

Public Management Results Evaluation System (SINERGIA)

Manuel Fernando Castro: Director, Public Policy Evaluation Bureau, National Planning Department. Phone interviews, Oct 2004 (*) and May 2005.
Ana María Fernández: Consultant, Public Policy Evaluation, National Planning Department. Phone interview, Mar 2006.
Costa Rica

National Evaluation System (SINE)

Florita Azofeifa Monge: Coordinator, MIDEPLAN's Evaluation and Monitoring Division. E-mail exchange, Oct 2004 (*).
José A. Calvo: Coordinator, MIDEPLAN's Evaluation and Monitoring Division. E-mail exchange, June 2005 and Feb 2006.

Uruguay

Results-Based Management Evaluation System (SEV)

Elizabeth Nuesch: Coordinator, Public Management Division, CEPRE, OPP. E-mail exchange, Oct 2004 (*), May 2005 and Feb 2006.

BIBLIOGRAPHY

Armijo, M. (2003) 'La Evaluación de la Gestión Pública en Chile'. In N. Cunill Grau and S. Ospina Bozzi (eds.) Evaluación de Resultados para una Gestión Pública Moderna y Democrática: Experiencias Latinoamericanas. Caracas: CLAD (Centro Latinoamericano de Administración Para el Desarrollo).

Babino, L.G. and A.J. Sotelo (2003) 'A Diez Años de la Reforma de la Administración Financiera Gubernamental en la Argentina. Análisis y Reflexiones'. In Revista Internacional de Presupuesto Público, No.53 (November - December). Asociación Internacional de Presupuesto Público (ASIP).

Becerra, J.P. (2003) 'Mecanismos de Control y Evaluación Presupuestaria'. Municipalidad de Florencio Varela - Fundación Capital (mimeo).

Braceli, O. (1998) Los Límites a la Evaluación de Políticas Públicas. El Presupuesto y la Cuenta de Inversión Nacional. Serie Cuadernos 260, Facultad de Ciencias Económicas de la Universidad Nacional de Cuyo, Mendoza.

Chelimsky, E. (1985) 'Old Patterns and New Directions in Program Evaluation'. In E. Chelimsky (ed.) Program Evaluation: Patterns and Directions. Washington, D.C.: The American Society of Public Administration.

_____ (1997) 'The Coming Transformations in Evaluation'. In E. Chelimsky and W.R. Shadish (eds.) Evaluation for the 21st Century: A Handbook. Thousand Oaks: Sage Publications.

Consejo Nacional de Política Económica y Social (2004) Renovación de la Administración Pública: Gestión por Resultados y Reforma del Sistema Nacional de Evaluación. Documento CONPES 3294. Presidencia de la República: Alta Consejería Presidencial. Versión aprobada. Bogotá.

Contaduría General de la Nación (2002) Cuenta de Inversión 2002. Subsecretaría de Presupuesto, Secretaría de Hacienda, Ministerio de Economía, Buenos Aires.

Cunill Grau, N. and S. Ospina Bozzi (eds.) (2003) Evaluación de Resultados para una Gestión Pública Moderna y Democrática: Experiencias Latinoamericanas. CLAD, AECI/MAP/FIIAPP, Caracas.

Dirección de Calidad de Servicios y Evaluación de Gestión (2002a) Aportes para una Gestión por Resultados. Estándares e Indicadores de Servicios. Programa Carta Compromiso con el Ciudadano, Subsecretaría de la Gestión Pública, Jefatura de Gabinete de Ministros, Buenos Aires.

_____ (2002b) Participación Ciudadana en la Administración Pública. Oficina Nacional de Innovación de Gestión, Subsecretaría de la Gestión Pública, Jefatura de Gabinete de Ministros, Buenos Aires.

División de Control de Gestión (2004) 'Programa de Evaluación.
Cálculo de Indicador "Porcentaje de Presupuesto Evaluado en Relación al Presupuesto Evaluable"'. Dirección de Presupuestos, Ministerio de Hacienda, Santiago.

_____ (2005) 'Implicancias de las Evaluaciones y sus Efectos Presupuestarios, Años 2000 - 2004'. Documento de Trabajo. Dirección de Presupuestos, Ministerio de Hacienda, Santiago.

Dirección de Planeamiento y Reingeniería Organizacional (2002) 'El Sistema de Gestión por Resultados: Documento Conceptual'. Oficina Nacional de Innovación de Gestión, Subsecretaría de Gestión Pública, Buenos Aires (internal document).

Dirección de Presupuestos (2004) 'Encuesta Resultados Intermedios/Finales ― Evaluación de Programas'. Ministerio de Hacienda (mimeo), Santiago.

_____ (2005) Aplicación de Instrumentos de Evaluación de Desempeño: La Experiencia Chilena. Ministerio de Hacienda, Santiago.

DNP-DEE (1997) 'Grupos Focales: Evaluación Herramienta Plan Indicativo. Informe Global'. Santafé de Bogotá.

Estévez, A.M. and G.E. Blutman (2004) 'El modelo burocrático inacabado después de las reformas de los 90: ¿Funcionarios, gerentes o sobrevivientes?'. In Revista Venezolana de Gerencia, Vol.9 No.25, Maracaibo.

Fonseca Sibaja, A. (2001) 'El Sistema Nacional de Evaluación: Un Instrumento para la Toma de Decisiones del Gobierno de Costa Rica'. XV Concurso de Ensayos del CLAD "Control y Evaluación del Desempeño Gubernamental", Third Prize. Caracas, Venezuela.

Freijido, E. (2003) 'El Sistema de Evaluación de la Gestión Pública por Resultados en Uruguay'. In N. Cunill Grau and S. Ospina Bozzi (eds.) Evaluación de Resultados para una Gestión Pública Moderna y Democrática: Experiencias Latinoamericanas. Caracas: CLAD.

Grupo de Trabajo Interministerial (1998) 'Institucionalidad para un Sistema Integrado de Evaluación de Intervenciones Públicas'. Ministerio de Hacienda, Segpres, Mideplan. Final report (mimeo), Santiago.

Guerrero, R.P. (1999) Comparative Insights from Colombia, China and Indonesia. Operations Evaluation Department ECD working paper no.5. Washington, D.C.: Operations Evaluation Department, The World Bank.

Guzmán, M. (2003) Systems of Management Control and Results-Based Budgeting: The Chilean Experience. Management Control Division, National Budget Office, Ministry of Finance, Santiago.

_____ (2005) Sistema de Control de Gestión y Presupuestos por Resultados: La Experiencia Chilena. División de Control de Gestión, Dirección de Presupuestos, Ministerio de Hacienda, Santiago.

Halliburton, E. and P. Baxendale (2001) Programa Carta Compromiso con el Ciudadano: Guía para su Implementación. Subsecretaría de la Función Pública, Oficina Nacional de Innovación de la Gestión, Buenos Aires.

_____, R. Fiszelew, M.I. Alfaro and E. Petrizza (2002) Participación Ciudadana en la Administración Pública. Dirección de Calidad de Servicios y Evaluación de Gestión, Oficina Nacional de Innovación de Gestión, Buenos Aires.

_____ and G. Guerrero (2002) Aportes para una Gestión por Resultados. Estándares e Indicadores de Servicios. Programa Carta Compromiso con el Ciudadano. Dirección Nacional de Calidad de Servicios y Evaluación de Gestión, Subsecretaría de la Gestión Pública, Buenos Aires.

Hardy, C. (2000) Redefinición de las Políticas Sociales y su Relación con la Gestión de los Programas. Proyecto de Reforma del Estado: Experiencias y Desafíos en América Latina. Estudio de Caso No.11 (Informe Final). Centro de Análisis de Políticas Públicas, Universidad de Chile ― Banco Interamericano de Desarrollo.

Hatry, H.P.
(1997) 'Where the Rubber Meets the Road: Performance Measurement for State and Local Public Agencies'. In K.E. Newcomer (ed.) Using Performance Measurement to Improve Public and Nonprofit Programs. San Francisco: Jossey-Bass Publishers.

Joyce, P.G. and S. Sieg (2000) 'Using Performance Information for Budgeting: Clarifying the Framework and Investigating Recent State Experience'. Prepared for the 2000 Symposium of the Center for Accountability and Performance of the American Society for Public Administration, held at the George Washington University, Washington, D.C.

Llosas, H. and H. Oliver (2003) 'La Cultura de la Evaluación en la Ley Argentina de Administración Financiera. El Caso de la Prefectura Naval Argentina'. Departamento de Economía, Universidad Católica Argentina, Buenos Aires.

Mackay, K. (2006) Institutionalization of Monitoring and Evaluation Systems to Improve Public Sector Management. Independent Evaluation Group ECD working paper no.15. Washington, D.C.: Independent Evaluation Group, The World Bank.

Makón, M.P. (2000) 'El Modelo de Gestión por Resultados en los Organismos de la Administración Nacional'. Paper presented at the V Congreso Internacional sobre la Reforma del Estado y de la Administración Pública, Santo Domingo.

Mokate, K. (2000) 'El Monitoreo y la Evaluación: Herramientas Indispensables de la Gerencia Social'. Diseño y Gerencia de Políticas Sociales. Banco Interamericano de Desarrollo, Instituto Interamericano para el Desarrollo Social (mimeo).

Mora Quirós, M. (2003) 'El Sistema Nacional de Evaluación de Costa Rica'. In N. Cunill Grau and S. Ospina Bozzi (eds.) Evaluación de Resultados para una Gestión Pública Moderna y Democrática: Experiencias Latinoamericanas. Caracas: CLAD.

Muñoz G., M. (2005) 'Evaluación de Resultados en Chile: El Sistema de Seguimiento de la Programación Gubernamental'. Unpublished paper, CLAD.

Neirotti, N. (2000) 'Reflexiones sobre la Práctica de la Evaluación de Programas Sociales en Argentina (1995 - 1999)'. Paper presented at the V Congreso Internacional sobre la Reforma del Estado y de la Administración Pública, Santo Domingo.

_____ (2001) 'La Función de Evaluación de Programas Sociales en Chile, Brasil y Argentina'. Paper presented at the VI Congreso Internacional del CLAD sobre la Reforma del Estado y de la Administración Pública, Buenos Aires.

Newcomer, K.E. (1997) 'Using Performance Measurement to Improve Programs'. In K.E. Newcomer (ed.) Using Performance Measurement to Improve Public and Nonprofit Programs. San Francisco: Jossey-Bass Publishers.

OECD (Organisation for Economic Co-operation and Development) (1998) Best Practice Guidelines for Evaluation. PUMA policy brief no.5. Paris: OECD.

_____ (2004) Budgeting in Chile. GOV/PGC/SBO (2004)7. 25th Annual Meeting of Senior Budget Officials, Madrid.

Oficina Nacional de Innovación de Gestión (2001) Programa Carta Compromiso con el Ciudadano: Guía para su Implementación. Secretaría para la Modernización del Estado, Buenos Aires.

Ospina Bozzi, S. (2001) 'Evaluación de la Gestión Pública: Conceptos y Aplicaciones en el Caso Latinoamericano'. In Revista del CLAD Reforma y Democracia, No.19 (February), Caracas.

_____, N. Cunill Grau and A. Zaltsman (2004) 'Performance Evaluation, Public Management Reform and Democratic Accountability: Some Lessons from Latin America'. In Public Management Review, Vol.6 No.2.

_____ and D. Ochoa (2003) 'El Sistema Nacional de Evaluación de Resultados de la Gestión Pública (SINERGIA) de Colombia'. In N. Cunill Grau and S. Ospina Bozzi (eds.)
Evaluación de Resultados para una Gestión Pública Moderna y Democrática: Experiencias Latinoamericanas. Caracas: CLAD.

Petrei, H. (1998) Budget and Control: Reforming the Public Sector in Latin America. Inter-American Development Bank, Washington, D.C.

Ramírez Alujas, A. (2001) Modernización de la Gestión Pública: El Caso Chileno (1994 - 2000). Universidad de Chile, Facultad de Ciencias Físicas y Matemáticas, Departamento de Ingeniería Industrial. Estudio de Caso Nº 58. Santiago.

_____ (2002) 'Innovación en la Gestión Pública: Lecciones, Aprendizajes y Reflexiones a Partir de la Experiencia Chilena'. Paper presented at the VII Congreso Internacional del CLAD sobre la Reforma del Estado y de la Administración Pública, Lisbon.

Razzotti, A. (2003) Estudio de Caso: El Proyecto Cristal (Argentina). Paper written for the PREM Public Sector Group, World Bank, in the context of the "E-Applications: Overcoming Challenges Inside Government" project (mimeo).

Rinne, J. (2003) 'The Politics of Administrative Reform in Menem's Argentina: The Illusion of Isolation'. In B. Ross Schneider and B. Heredia (eds.) Reinventing Leviathan: The Politics of Administrative Reform in Developing Countries. North-South Center Press, University of Miami.

Rodríguez Larreta, H. and F. Repetto (2000) Herramientas para una Administración Pública más Eficiente: Gestión por Resultados y Control Social. Fundación Gobierno y Sociedad, Documento 39, Buenos Aires.

Rossi, P.H. and H.E. Freeman (1993) Evaluation: A Systematic Approach. Newbury Park, California: Sage Publications.

SIEMPRO (undated) 'El Monitoreo Estratégico de los Programas Sociales Nacionales ― SIM. Propuesta Metodológica'. Buenos Aires.

SINERGIA (2004a) Programa Familias en Acción: Condiciones Iniciales de los Beneficiarios e Impactos Preliminares. Evaluación de Políticas Públicas No.1, Departamento Nacional de Evaluación, Colombia.

_____ (2004b) Programa Empleo en Acción: Condiciones Iniciales de los Beneficiarios e Impactos de Corto Plazo. Evaluación de Políticas Públicas No.2, Departamento Nacional de Evaluación, Colombia.

_____ (2004c) Red de Apoyo Social: Conceptualización y Evaluación de Impacto. Evaluación de Políticas Públicas No.3, Departamento Nacional de Evaluación, Colombia.

Subsecretaría de la Gestión Pública (2003a) Informe de Gestión 2002-2003. Jefatura de Gabinete de Ministros, Buenos Aires.

_____ (2003b) 'Síntesis de la Evaluación del Programa Carta Compromiso con el Ciudadano'. Unidad Coordinadora del Programa, Dirección de Calidad de Servicios y Evaluación de Gestión, Oficina Nacional de Innovación de Gestión, Subsecretaría de la Gestión Pública, Buenos Aires.

Subsecretaría de Presupuesto (undated) El Sistema Presupuestario Público en la Argentina. Secretaría de Hacienda, Ministerio de Economía y Obras y Servicios Públicos, Buenos Aires.

U.S. General Accounting Office (1998) Performance Measurement and Evaluation: Definitions and Relationships. GAO/GGD-98-26, Washington, D.C.

Wholey, J.S. and K.E. Newcomer (1997) 'Clarifying Goals, Reporting Results'. In K.E. Newcomer (ed.) Using Performance Measurement to Improve Public and Nonprofit Programs. San Francisco: Jossey-Bass Publishers.

World Bank (1997) Colombia: Paving the Way for a Results-Oriented Public Sector. Report No.15300-CO, Country Operations Division I, Country Department III, Latin America and Caribbean Region. Washington, D.C.

_____ (2005) Chile: Study of Evaluation Program. Impact Evaluations and Evaluations of Government Programs. Final Report, Executive Summary.
The World Bank: Washington, D.C.

_____ and Inter-American Development Bank (2006) Towards the Institutionalization of Monitoring and Evaluation Systems in Latin America and the Caribbean. Proceedings of a World Bank/IADB Conference. The World Bank: Washington, D.C.

Zaltsman, A. (2004) 'La Evaluación de Resultados en el Sector Público Argentino: Un Análisis a la Luz de Otras Experiencias en América Latina'. In Revista del CLAD Reforma y Democracia, No.29.

_____ (2006) 'Credibilidad y Utilidad de los Sistemas de Monitoreo y Evaluación para la Toma de Decisiones: Reflexiones en Base a Experiencias Latinoamericanas'. In M. Vera (ed.) Evaluación para el Desarrollo Social: Aportes para un Debate Abierto en América Latina. Instituto Interamericano para el Desarrollo Social, Banco Interamericano de Desarrollo. Magnaterra Editores, Ciudad de Guatemala.

Other Papers in This Series

#1: Keith Mackay. 1998. Lessons from National Experience.
#2: Stephen Brushett. 1998. Zimbabwe: Issues and Opportunities.
#3: Alain Barberie. 1998. Indonesia's National Evaluation System.
#4: Keith Mackay. 1998. The Development of Australia's Evaluation System.
#5: R. Pablo Guerrero O. 1999. Comparative Insights from Colombia, China and Indonesia.
#6: Keith Mackay. 1999. Evaluation Capacity Development: A Diagnostic Guide and Action Framework.
#7: Mark Schacter. 2000. Sub-Saharan Africa: Lessons from Experience in Supporting Sound Governance.
#8: Arild Hauge. 2001. Strengthening Capacity for Monitoring and Evaluation in Uganda: A Results Based Management Perspective.
#9: Marie-Hélène Adrien. 2003. Guide to Conducting Reviews of Organizations Supplying M&E Training.
#10: Arild Hauge. 2003. The Development of Monitoring and Evaluation Capacities to Improve Government Performance in Uganda.
#11: Keith Mackay. 2004. Two Generations of Performance Evaluation and Management System in Australia.
#12: Adikeshavalu Ravindra. 2004. An Assessment of the Impact of Bangalore Citizen Report Cards on the Performance of Public Agencies.
#13: Salvatore Schiavo-Campo. 2005. Building Country Capacity for Monitoring and Evaluation in the Public Sector: Selected Lessons of International Experience.
#14: Richard Boyle. 2005. Evaluation Capacity Development in the Republic of Ireland.
#15: Keith Mackay. 2006. Institutionalization of Monitoring and Evaluation Systems to Improve Public Sector Management.

Other Recommended Reading

Operations Evaluation Department (OED). 2004. Evaluation Capacity Development: OED Self-Evaluation.
OED. 2002. Annual Report on Evaluation Capacity Development.
OED. 2004. Influential Evaluations: Evaluations that Improved Performance and Impacts of Development Programs.
OED. 2005. Influential Evaluations: Detailed Case Studies.
OED. 2004. Monitoring and Evaluation: Some Tools, Methods and Approaches. 2nd Edition.
Independent Evaluation Group (IEG). 2006. Conducting Quality Impact Evaluations Under Budget, Time and Data Constraints, forthcoming.
Development Bank of Southern Africa, African Development Bank and The World Bank. 2000. Developing African Capacity for Monitoring and Evaluation.
K. Mackay and S. Gariba (eds.) 2000. The Role of Civil Society in Assessing Public Sector Performance in Ghana. OED.

Other relevant publications can be downloaded from IEG's ECD website: http://www.worldbank.org/ieg/ecd/