Core Course: Economic Growth
WBI Evaluation Unit, World Bank
Number 33, May 1999

Ray C. Rist, Advisor, (202) 458-5625
Mary Cusick, Evaluation Analyst, (202) 243-3241
Jorge Araujo

The Economic Policy and Poverty Reduction Group of WBI (WBIEP) conducted a core course on Economic Growth in Washington, D.C. on January 20-21, 1999. This was the third offering of the course; the previous two were held in Washington, D.C. in February and June 1998. (The course has also been offered in other forms in other countries; because of the similarities among the D.C. courses, this evaluation reviews only the three D.C. offerings.) The two-day course aimed to strengthen participants' analytical and research skills in the field of economic growth. It was designed to provide both a theoretical framework for and empirical studies of the determinants of economic growth, as well as an understanding of policy issues for achieving equitable and sustainable growth.

Thirty-two participants attended the course, all of them World Bank staff. Twelve (37.5%) were women.

The course was evaluated by the WBI Evaluation Unit (WBIES) using a Level 1 (participants' reaction) end-of-course questionnaire. The questionnaire was completed by 21 respondents, 65.6% of the participants, and asked for their views on course relevance, benefits, design and trainers. The results were used to compare this course with the previous two offerings. Each of the previous courses had 51 participants, including individuals from Partner Institutions as well as World Bank economists; in contrast, this offering had 32 participants, all World Bank staff. Additionally, unlike the questionnaires used in the two previous offerings, this one did not measure participants' self-assessed learning gains.

Respondents rated the questions in each section on a 5-point Likert-type scale ranging from 1 = minimum to 5 = maximum. The major evaluation findings follow.

• As in previous offerings, the overall usefulness of the course exceeded 3.0 on the 5-point scale, with a mean score of 4.4 for this indicator.

• Of all the questions on the relevance of the course, respondents rated it most relevant to meeting their personal learning needs. The mean score for this indicator was 4.6, with 94.7% of respondents choosing either 4 or 5 on the 5-point scale. This score was higher than in the two previous offerings, and this indicator has had the highest mean score in each of the three D.C. offerings.

• Ratings of the effectiveness of the trainers reflect the consistent quality of the presenters for this course. Ninety-five percent of respondents selected 4 or 5 for the overall quality of the trainers' answers to participants' questions (mean = 4.5), and 91% selected 4 or 5 for the overall effectiveness of the trainers in communicating their message (mean = 4.4).

• Mean scores for the course design indicators ranged from 3.9 (usefulness of the training materials for you up to now) to 4.4 (effectiveness of the course in maintaining your interest during its full duration). Participants felt strongly that the discussions were constructive and that the presentations were useful (mean = 4.3 for each).
With the exception of the usefulness of the training materials up to now, all of the course design indicators scored higher than they did in the previous two offerings.

• Participants were asked to respond to four additional course design indicators on a different 5-point scale, ranging from 1 = insufficient to 5 = excessive; the midpoint, 3 = adequate, is the optimal score. For each of these indicators, a majority of respondents selected the "adequate" category. Only 55% of respondents felt that the group was adequate with regard to the diversity of participants, but 30% held no opinion on this indicator. In contrast to the previous offerings, each of which included participants from Partner Institutions as well as Bank staff, this course targeted a more homogeneous group (Bank staff). A question about participant diversity may be inappropriate for a course that specifically targets a homogeneous group, which may explain the large share of respondents who held no opinion on it.

• As in the past, the indicators measuring the extent to which the course helped participants better assess various policy alternatives received relatively lower ratings than the indicators on relevance, design and trainers. Mean scores ranged from 3.6 to 3.8 (61.9% to 66.7% of respondents selected 4 or 5 for these indicators). Respondents' increase in familiarity with policies that have worked well in some cases had a mean score of 3.8, the highest in this group of indicators; in previous offerings it had been the lowest in the same group. One question asked respondents to rate the adaptability of the policies discussed to their own situation; its mean score of 3.6 was the lowest among all course performance indicators. The task managers are advised to continue working, at the design stage, on course benefits with regard to policy options.

• The pattern among course indicators has not changed significantly between offerings. Charts A, B and C below illustrate the pattern among groups of indicators for the three offerings (February 1998, June 1998 and January 1999). Collective mean scores for the "relevance", "trainers" and "overall" indicators tend to be the highest, while collective means for the "benefits" indicators are consistently the lowest. Again, the task managers may wish to focus on the benefits of the course with regard to the extent to which it helped participants better assess various policy options.

Chart A: Collective mean scores for groups of indicators in the February 1998 offering.
Chart B: Collective mean scores for groups of indicators in the June 1998 offering.
Chart C: Collective mean scores for groups of indicators in the January 1999 offering.

• When weighing the fact that this offering had higher mean scores on many indicators than the previous offerings, it is important to note the change in the size and composition of the participant group: the course was roughly 37% smaller. Indicators that may plausibly have been affected by course size include the extent to which the discussions were constructive, the overall effectiveness of the trainers in communicating their message, and the overall quality of the trainers' answers to participants' questions. Table 1 below compares the mean scores for these indicators across all three offerings.
As Table 1 shows, the most recent course, which was significantly smaller, yielded markedly higher mean scores for the selected indicators. These results should be interpreted with caution, since mean scores also rose between the February and June offerings with no change in course size. However, the gain from June 1998 to January 1999 was larger, ranging from 0.23 to 0.43 points, compared with only 0.02 to 0.07 points from February to June.

Table 1: Mean scores (number of respondents) for selected indicators, by offering

Indicator                                            Feb. 98      June 98      Jan. 99
Course size                                          51           51           32
Extent to which the discussions
  were constructive                                  3.85 (34)    3.87 (38)    4.3 (21)
Overall effectiveness of the trainers
  in communicating their message                     4.15 (34)    4.17 (36)    4.4 (21)
Overall quality of the trainers' answers
  to the participants' questions                     4.12 (34)    4.19 (36)    4.5 (20)

• A criterion for judging the effectiveness of WBI training is that at least 85 percent of respondents give a satisfaction rating of 4 or 5 on the 5-point scale. It is promising that so many of the indicators in this questionnaire scored above this 85 percent mark; for the overall usefulness of the course, 95.2 percent of respondents in this offering selected a 4 or 5.

• A Level 2 measure of participant knowledge was not used in this evaluation. Task managers are advised to continue to track self-assessed learning gains, as well as actual learning gains using cognitive testing where applicable, in future offerings.
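The indicator statistics cited in this note (mean scores, the share of respondents selecting 4 or 5, and the 85 percent criterion) are straightforward descriptive calculations. The short Python sketch below is illustrative only; it uses hypothetical ratings rather than the actual questionnaire records, and simply shows how such figures are derived from responses on a 5-point scale.

    # Illustrative sketch: the ratings below are hypothetical, not the actual
    # questionnaire data for this course offering.
    responses = [5, 4, 4, 5, 3, 4, 5, 5, 4, 3, 4, 5, 4, 4, 5, 4, 3, 5, 4, 5, 4]

    n = len(responses)
    mean_score = sum(responses) / n

    # Share of respondents giving a rating of 4 or 5 on the 5-point scale.
    top_two_share = sum(1 for r in responses if r >= 4) / n

    # WBI effectiveness criterion: at least 85 percent of respondents rate the item 4 or 5.
    meets_wbi_criterion = top_two_share >= 0.85

    print(f"respondents    = {n}")
    print(f"mean score     = {mean_score:.1f}")
    print(f"rated 4 or 5   = {top_two_share:.1%}")
    print(f"meets 85% mark = {meets_wbi_criterion}")

For example, with 21 respondents, 20 ratings of 4 or 5 give the 95.2 percent figure cited above for the overall usefulness of the course.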