Knowledgeable Bankers? The Demand for Research in World Bank Operations

Martin Ravallion1
World Bank, 1818 H Street NW, Washington DC, USA 20433
mravallion@worldbank.org

Published in the Journal of Development Effectiveness, 5(1), 2013. © 2013 The World Bank. This Author Accepted Manuscript is distributed under a CC BY-NC-ND 3.0 Unported license (http://creativecommons.org/licenses/by-nc-nd/3.0/).

Abstract: Development impact calls for knowledgeable development practitioners. How then do the operational staff of the largest development agency value and use its own research? Is there an incentive to learn and does it translate into useful knowledge? A new survey reveals that the bulk of the World Bank's senior staff value the Bank's research for their work, and most come to know it well, although a sizable minority have difficulty accessing research to serve their needs. Another group sees little value to research for their work and does not bother to find out about it. Higher perceived value is reflected in greater knowledge about research, though there are frictions in this process. Staff working on poverty, human development and economic policy tend to value and use Bank research more than staff in the more traditional sectors of Bank lending—agriculture and rural development, energy and mining, transport and urban development; the latter sectors account for 45 percent of lending but only 15 percent of staff highly familiar with Bank research. Without stronger incentives for learning and more relevant and accessible research products, it appears likely that this lag in demand for research by the traditional sectors will persist.

Keywords: World Bank research, incentive to learn, absorptive capacity, development aid

1 The author is grateful to Asli Demirguc-Kunt, Marianne Fay, Aart Kraay, Tamar Manuelyan Atinc, Will Martin, Berk Ozler, Mike Toman, Dominique van de Walle and Adam Wagstaff for useful comments. However, the author alone is responsible for the interpretations of the survey data provided in this paper. In particular, the findings, interpretations and conclusions of this paper should not be attributed to the World Bank or any affiliated organization.

1.
Introduction In trying to understand the impact of development aid, much attention has been given to the role played by the circumstances of recipient countries, notably whether their policy environment and governance are conducive to high social returns from aid.1 The literature has given less attention to the role played by the staff and managers in donor organizations. In the case of the World Bank, there have been concerns about the Bank’s “lending culture,” which tends to reward operational staff for the volume of their lending, with (it is argued) too little weight given to the quality of lending.2 In one of the few studies of aid effectiveness to look at these issues, Denizer, Kaufmann and Kraay (2011) confirm that the “macro” variables for recipient countries are relevant to the (subjectively but independently assessed) quality of the Bank’s development projects. But they also find that the bulk of the variance in the quality of the Bank’s lending operations is within countries rather than between them. The quality of the staff in charge of projects on the donor’s side matters at least as much as the policy environment on the recipient’s side. The knowledge of operational staff within donor agencies is likely to be important to the quality of development aid. There is evidence that the stock of prior analytic work by the Bank on a recipient country is a strong predictor of the subsequent quality of its lending operations to that country (Deininger, Squire and Basu, 1998; Wane, 2004) and that the quality of prior analytic work matters to the quality of its projects (Fardoust and Flanagan, 2011). The stock of practitioners’ knowledge depends on their demand for knowledge, as well as its supply. Thus the incentives for learning within aid organizations have been identified as one factor in the quality of development aid (Wane, 2004; Ravallion, 2011). The generation of relevant knowledge and its diffusion within donor agencies is a poorly understood factor in development effectiveness. While research is clearly a key element of knowledge generation and innovation, any large and complex organization like the World Bank will face challenges in assuring that relevant basic research is both produced and used. As the literature on organizations has emphasized, a key factor is the ability to exploit the internal division of labor to support innovation (Hage, 1999). The role of a designated “research department” in bringing together internal research capabilities and in absorbing new external information to assure innovation has long been recognized in the literature (following, in particular, Cohen and Levinthal, 1990). But 2 having such a department does not guarantee that practitioners will have the incentive to learn from research or that the research produced will serve their needs. In practice, there may well be frictions within an organization that inhibit learning even when the incentive is there. The World Bank (WB) has a department dedicated to research, the Development Research Group (DECRG), within the Development Economics Vice-Presidency (DEC), as well as researchers scattered across other units. Research accounts for roughly 1% of Bank staff; there are about 75 full-time research staff in DECRG and probably another 25 or so full-time equivalent researchers outside DECRG.3 DECRG spans all sectors of the Bank’s work. 
Its research is almost invariably empirical and typically relies on economics as the primary discipline, though increasingly drawing on insights from other social sciences. By objective criteria, the scale and quality of its research on development economics exceeds that of virtually all other international agencies and universities.4 DECRG aims in part to serve the needs of its operational units, which provide development lending and policy advice to developing countries.5 However, we know very little about the demand for the Bank’s research among the organization’s operational staff. How familiar are they with Bank research? How much do they rely on it for their work? How do the answers to these questions vary across units and sectors of the Bank? We know even less about the incentives for learning among the Bank’s operational staff. Those incentives depend crucially on the value attached to WB research by staff for their work. Do the Bank’s practitioners value research for their work, and (if so) does this incentive to learn translate into greater familiarity and use of the Bank’s research? While these questions clearly matter to the Bank’s effectiveness as a development aid organization, we know almost nothing about the answers. This paper tries to throw new light on the demand for research by development practitioners within the World Bank. A key empirical question is how operational staff are mapped into the four groups identified in Table 1. If the Bank’s operational staff highly value Bank research for their work, and the research done is relevant to their needs and accessible to them, then the bulk of staff will fall into the category “functionally well-informed.” For this group, the incentive to learn comes with acquired knowledge. There may also be staff who are well-informed about research, but not because it matters to their work; they are “independently 3 well-informed.” And some value research for their work but face hurdles in accessing it, so their familiarity is low; these are the “frustrated uninformed.” The remainder comprise those who do not see much value to Bank research for their work, and so do not seek it out; they can be said to be “voluntarily uninformed.” The paper uses a specially commissioned survey of the World Bank’s senior operational staff to determine how they are mapped to the four groups in Table 1. The paper’s analysis of these data aims to better understand the demand for Bank research by the Bank’s practitioners— reflecting both their incentives to learn and the responsiveness of the supply of research to their needs. 2. The survey World Bank staff typically work in (overlapping) teams assigned to specific projects, each team having a “Task Team Leader” (TTL) in the Bank’s terminology. The population for the survey comprised all “senior staff,” defined as those at grades “GG” and above (almost all “GH” and “GI”), which account for about one-quarter of all staff. They are all likely to be the TTLs of one or more operations. The survey was only sent to GG+ staff excluding DEC.6 The survey was implemented using a confidential, web-based, survey tool.7 On top of basic information on Vice-Presidential Unit (VPU), “sector” (also called “network”) and years of Bank service, the survey asked a series of questions on familiarity with WB research, sources of knowledge about that research, and the value of WB research. The survey had little practical choice but to rely on subjective self-assessments. 
Psychological biases cannot be ruled out in such data, such as if WB staff over-state their familiarity with research because they like to feel good about themselves and their work. There were 2,900 recipients of the survey instrument and 555 responses. There was only a modest variation in response rates across grades, namely 18%, 21% and 15% for grades GG, GH and GI+ respectively (as compared to 19% overall).8 More worrying is that the response rate varied markedly across VPUs, as can be seen in Table 2. The lowest response rate was for the Middle East and North Africa (MNA) at 12%, while the highest was for Poverty Reduction and Economic Management (PRM) at 50%. To the extent that such differences in response rates are correlated with perceptions of research there will be a bias for drawing inferences about the 4 means for the population of the Bank’s senior staff, although correlations and regressions are likely to be more robust. 3. Knowledge about research among the Bank’s operational staff Learning about research findings is not costless. To be well-informed, practitioners must spend precious time searching out and studying research products. The costs for doing so will vary across staff, as will the expected benefits. The survey asked a number of questions about familiarity and use of World Bank research. The key question was as follows: “How familiar would you say you are with WB research products/services on a scale from 1 to 10 where 1 means not familiar at all and 10 means extremely familiar?”9 Figure 1 gives two density functions for the responses, one for the raw data and one with regression adjustment for years of service at the WB (and its squared value) and the sector and Vice-Presidency of the respondent. The densities are fairly symmetric around a mode and median of 6, and mean of 5.74, with a standard error of 0.10. To interpret an average score of 6 we should recall the fact that researchers are a small proportion of the Bank’s staff count—somewhere around 1%. To help interpret any given mean score, let us assume that—through their writings, presentations and personal contacts—the 100 or so researchers influence an average number of n Bank staff to become “very familiar” with research over the relevant time period. Of course there is an overlap, in that the same operational staff come back again and again. So n should be interpreted as the mean net gain attributed to the influence of individual researchers. To keep this simple, let us also assume that the rest of the staff know nothing about research. So the researchers are able to make 100n staff very familiar— a score of 10 on the scale. There are about 2,900 staff in the relevant population. Clearly, to achieve the top score of 10, researchers would need to bring n=29 staff on average up to a 10. What then will be the value of n implied by an average score of s? It is readily verified that the answer is 3.22(s-1). So a mean score of 6 requires n =16; individual researchers would have made an extra 16 Bank staff (on average) “very familiar” with WB research. To get a mean score of 8 they would need to reach 23 Bank staff; by contrast, if they only reached 10 staff the mean score would be 4. 5 There is substantial variation in familiarity with research across Bank units and sectors of work. The Bank has a matrix structure whereby staff are mapped to both a VPU and sector/network. Table 3 gives summary statistics stratified by VPU. There is less variation in familiarity between regional VPUs than sectoral VPUs. 
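As an aside on the arithmetic behind this interpretation of the mean score: under the stated assumptions (about 100 researchers, a relevant population of 2,900 senior staff, a score of 10 for the 100n staff reached and a score of 1 for everyone else), the mean score s satisfies

$$ s \;=\; \frac{10\,(100n) + 1\,(2900 - 100n)}{2900} \;=\; 1 + \frac{900\,n}{2900} \quad\Longrightarrow\quad n \;=\; \frac{2900}{900}\,(s-1) \;\approx\; 3.22\,(s-1). $$

Setting s = 6 gives n ≈ 16, s = 8 gives n ≈ 23, and n = 10 gives s ≈ 4, matching the figures quoted above.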
Among the Bank’s regional VPUs, the lowest mean score for familiarity is for East Asia and the Pacific (EAP) while the highest is for the Middle-East and North Africa (MNA). Defining “high familiarity” as those with a self-rated score of 6 or more, we find that over two-thirds of respondents in MENA have high familiarity, as compared to less than 45% in EAP. Of course, not all of these differences are statistically significant; indeed, only for two VPUs—namely the Human Development Network (HDN) and PRM—are the differences in either the mean score or the proportion with high familiarity significantly different (at the 5% level) from the proportion for the World Bank Institute (WBI). (When relevant, all significance tests reported are based on robust standard errors.) However, the difference between sectoral VPUs is striking with only 36% of WBI staff rating their familiarity over 5, as compared to 81% in HDN and 71% in PRM. A potential source of bias is suggested by the fact that the two VPUs with the highest response rate also had the highest familiarity with WB research, namely HDN and PRM (comparing Tables 2 and 3). The overall correlation coefficient between response rate and mean familiarity across the 12 VPUs is 0.65. However, re-weighting the data to accord with the population proportions (the numbers of people invited across VPUs in Table 2) only reduces slightly the mean familiarity score, to 5.66 as compared to 5.74 for the un-weighted data. While there is clearly some sample selection bias, it appears to be modest. There may well be a bias within VPUs based on other unobserved factors, though this cannot be assessed. The rest of this paper will rely mainly on the un-weighted data. The differences in the extent to which different VPUs draw on the Bank’s research can be expected to reflect differences in their staffing. As noted, WB research is dominated by economic analysis. Some VPUs have hired more staff trained in research, as indicated by a PhD (in any subject), and some rely more on economists. Low demand for WB research might stem from high internal capacity for research in a VPU. Alternatively, internal capacity for research may enhance the unit’s absorptive capacity for learning and innovation (as postulated by Cohen 6 and Levinthal, 1990).10 Internal capacity reduces search costs and creates the kinds of overlaps in knowledge that facilitate the assimilation of new knowledge from external sources. The idea of absorptive capacity from the literature on industrial organization is consistent with what we see when we look at how the incidence of senior staff with a PhD and the share of economists vary across VPUs.11 On comparing the proportion of PhDs with the mean familiarity scores in Table 3 one finds a correlation coefficient of 0.66 while it is 0.54 for the % above 5; these are significant at the 1% and 2% levels respectively. There is an even higher correlation across VPUs between average familiarity with WB research and the share of staff that have “Economist” in their job title (Table 2). The correlation coefficients between the ratio of economists to senior staff (sent the survey) and the mean score for familiarity is 0.84, while the correlation with the % above 5 is 0.75; these are both significant at the 1% level. It appears that economists and staff with a PhD play a key role in the diffusion process for WB research throughout the institution. 
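As an illustration of the cross-VPU calculations reported above, the following is a minimal sketch using the unit-level figures from Tables 2 and 3. This reconstruction is mine rather than the paper's own code, and the results may differ slightly from the reported statistics because of rounding and the exact staff denominators used in the paper.

```python
import numpy as np
from scipy import stats

# VPU-level figures transcribed from Tables 2 and 3 (the 12 operational VPUs; DEC excluded),
# in the order AFR, EAP, ECA, LCR, MNA, SAR, FPD, HDN, OPC, PRM, SDN, WBI.
invited = np.array([651, 349, 337, 309, 203, 331, 114, 77, 83, 76, 293, 75])
economists = np.array([57, 31, 43, 42, 25, 22, 8, 38, 10, 56, 11, 11])
mean_familiarity = np.array([5.67, 5.31, 5.41, 5.34, 6.13, 5.88,
                             5.50, 7.33, 5.41, 6.89, 5.63, 5.18])

# Re-weighting the VPU means by the numbers invited (the population shares) approximates the
# selection-bias correction described above; it should give a value close to the reported 5.66.
print(round(np.average(mean_familiarity, weights=invited), 2))

# Cross-VPU correlation between the economist share of senior staff and mean familiarity.
# The text reports r = 0.84; small discrepancies can arise from rounding of the inputs.
r, p = stats.pearsonr(economists / invited, mean_familiarity)
print(round(r, 2), round(p, 3))
```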
This is consistent with the role economists have played in the diffusion of economic policy ideas more generally (Kogut and Macpherson, 2011). An objective measure of demand for WB research is found in the intra-Bank financial flows designated as (paid) “cross-support.” Each of the research staff in DECRG is required to sell a minimum of 13 weeks of their time per year to other units, almost all of which are in operations, either regional units or network anchors.12 This arrangement exists in part to help assure that researchers are exposed to the problems and challenges faced in the Bank’s operational settings in the hope that this will generate relevant research, but also that operational staff will better appreciate the gains from a researcher’s perspective, such as in critically assessing prevailing policy views and practices. TTLs in operations typically come to the researchers with a specific task, such as supervising new data collection or undertaking analytic work for an analytic product, called “AAA,” which aim to inform policy discussions in specific countries. Table 2 gives the three-year mean cross-support provided by DECRG to each VPU. When deflated by the number of GG+ staff, the “cross-support per capita” is correlated with the mean familiarity scores across VPUs; the correlation coefficient is 0.53 (which is significant at the 1% level). Cross-support is also higher to VPUs with a higher share of PhDs (r=0.74) and more economists per capita (r=0.79). 7 The differences between the Bank’s sectors come out sharply in Table 4.13 Over 80% of responding staff in the Poverty unit of PRM rate themselves as highly familiar with WB research, with a mean score of almost 7. Yet less than one third of those in the Energy and Mining (EM) and Urban Development (URB) had high familiarity. There is a clear tendency for lower familiarity with WB research amongst the Bank’s traditional “infrastructure” sectors— agriculture and rural development, energy and mining, urban development, transport—than the newer “economic and human development sectors” (economic policy, poverty, education, health and nutrition, and social protection). The mean score for ARD, EM, TRN and URB is 5.19 (s.e.=0.24) as compared to 6.48 (0.14) for EP, EDU, HNP, POV and SP (and the difference is significant; t=4.66). The corresponding proportions of staff with a high familiarity score (over 5) are 44.58% (5.47%) and 70.68% (3.30%). In all, 41 staff in the “hard sectors” (ARD, EM, TRN and URB) gave a score of 6 or more, while 279 did so in the sample as a whole. Thus the hard sectors accounted for 15% of the staff with high familiarity. By contrast, these sectors accounted for 45% of Bank lending.14 Table 4 also gives my estimates of the allocation of DECRG’s staff (including managers) across sectors.15 Mean familiarity amongst operational staff is positively correlated with DECRG’s staffing, with a correlation coefficient of 0.527 (t=2.85; prob.=0.016). The positive correlation is consistent with the view that staffing of DECRG matters to familiarity with research amongst the Bank’s operational staff. The marked difference in familiarity with research seen in Table 4 match the allocation of staff in DECRG. The sectors ARD, EM, TRN and URB account for 14.9% of staff in the research department, as compared to 59.1% for EP, EDU, HNP, POV and SP. The table also gives shares of research publications across sectors. 
A total of 8,558 research publications were identified and 7,479 of them could be allocated across sectors.16 (Individual items were allocated to more than one sector, depending on their topic.) The correlation with the share of DECRG staff is high (r=0.839), as one would expect (though note that these are not only the publications of DECRG staff). However, the correlation of the publication shares with staff familiarity is lower, with r=0.387 (t=2.49; prob.=0.030). The survey also asked for a rating (also on a 10-point scale) of other sources of research besides the Bank.17 The two most important sources for Bank operational staff are likely to be 8 academia and external consultants. Tables 3 and 4 also give the mean scores by VPU and sector respectively. In both cases, the mean scores tend to be positively correlated with ratings of familiarity with the Bank’s internal research.18 This is also true across staff, with a positive correlation between ratings of familiarity with WB research and ratings of both academia and external consultants, though only significant for academia.19 It does not appear to be the case that the sectors and staff that know less about the Bank’s own research tend to value these external sources more highly. Why then do we see these differences in familiarity with WB research? It might be argued that they just reflect the nature of the work done by operational staff in different sectors. By this view, staff in the “hard infrastructure sectors” do not need the Bank’s research as much as in the “social sectors.” To paraphrase an argument I have heard: “We all know that roads are good for development and we don’t need economic analysis to confirm that; what we need is to make sure that the roads meet sound engineering standards.” It is hard to say what validity should be attached to this view. There are clearly differences in the nature of the knowledge needed by staff in different sectors. But the need for research is surely common between the “hard” and “soft” sectors. Economic analysis is important in formulating strategies and policies for infrastructure, notably in assessing the effectiveness of lending operations (both ex ante and ex post) and in devising sound pricing and investment policies. And the trade-offs invariably faced by governments between building roads and schools (say) must be informed by economic analysis. The uneven familiarity with WB research across sectors evident in Table 4 is not easily justified from the point of view of sound development policy and WB lending. It might be conjectured that the fact that economics is the dominant discipline in Bank research makes that research less relevant to some sectors. As discussed above, this does not seem credible as an explanation for why Bank research is used less in the traditional infrastructure sectors. Nor does it fit well with the fact that staff in Health, Nutrition and Population (HNP)—dominated by non-economists—are very familiar with research (Table 4). Possibly the idea has more salience for the Social Development (SDV) sector, which uses non- economics social sciences more than economics. (Note that SDV has the third lowest mean familiarity score in Table 4.) This division of labor is not, however, exogenous, but has evolved in a path-dependent way through hiring decisions. And it can change, albeit slowly. 
For example, 9 DECRG’s latest Policy Research Report (its flagship publication) is a substantial evaluative study, done by two economists, of the main lending instruments used by SDV operations (Mansuri and Rao, 2012). This report stemmed from demands from operations for more knowledge about the impacts of these interventions. To get an idea of how demand for research amongst operational staff is changing, the survey also asked: “To what extent do you currently rely on Bank research for your work?” and “To what extent do you expect to rely on Bank research for your work in the next few years?” Again, respondents were presented with a 10 point scale, this time from “not at all” (1) to “very much” (10). Figure 2 gives the density functions for responses. The responses on current reliance on WB research are strikingly bi-modal, with a “low-usage” mode around 3 and a high-usage mode around 7. The overall mean response is 5.40 and the median is 6, though this might be considered deceptive given the bi-modality. If we define the low-reliance and high-reliance groups as those with a score of 5 or lower versus 6+ this splits the sample almost equally, with 50.91% in the high group. Equally striking is the fact that the lower mode largely vanishes when one turns to the second question, about future reliance on WB research (Figure 2). This reflects a noticeably higher expected future reliance on WB research than for the past. This is consistent with the view that the WB is becoming a “Knowledge Bank.” But it might also be a psychological effect—a “New Year’s resolution” effect—whereby staff hope to learn more from research in the future. Those who have used WB research in the past are more likely to expect to use it in the future; the correlation coefficient is 0.87 (significant at better than the 0.1% level). (With controls for years of service, location, VPU and sector the correlation is 0.85, and similarly significant.) One possibility is that staff learn about how useful research can be, creating persistence; another possibility is that the correlation stems from time-invariant characteristics of respondents, whereby some are intrinsically more attracted to research. Figure 3 plots responses to the question on expected future reliance on WB research against that for past reliance. (In both cases, the scatter plot controls for respondent characteristics, namely years of service, location, VPU and sector.) While we see a strong positive relationship, the overall slope is less than unity. (The straight line in Figure 3 is the 45 10 degree line, while the other line is the non-parametric regression line of best fit.) Across individual staff, the regression coefficient of the expected increase in demand (future less current) on the current reliance on research is -0.175 (s.e.=0.027). This suggests that the disparities in usage of research can be expected to decline over time, interpretable as a process of convergence. It is of interest to look more closely at where this expected increase in demand for WB research is coming from. Tables 5 and 6 give the summary statistics across VPUs and sectors respectively. The VPU for which demand for research is expected to rise most on average is MNA while the lowest expected increase is for PRM. MNA is also the regional VPU with highest mean score for current reliance on research. The sector where demand is expected to increase most is EM while the lowest is SP. 
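A rough sketch of the staff-level convergence regression described earlier in this section follows. The survey microdata are confidential, so the data frame below is synthetic and the column names are my own; on the real data the paper reports a coefficient of -0.175 (robust s.e. 0.027) on current reliance.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the confidential staff-level survey data; 'current' and 'future'
# are the two 1-10 reliance scores, 'hq' flags staff based at headquarters.
rng = np.random.default_rng(0)
n = 555
df = pd.DataFrame({
    "current": rng.integers(1, 11, n),
    "years": rng.integers(1, 30, n),
    "hq": rng.integers(0, 2, n),
    "vpu": rng.choice(["AFR", "EAP", "ECA", "LCR", "MNA", "SAR", "PRM", "SDN"], n),
    "sector": rng.choice(["ARD", "EP", "EDU", "EM", "HNP", "POV", "TRN", "URB"], n),
})
df["future"] = np.clip(df["current"] + rng.normal(0.5, 1.5, n), 1, 10)

# Regress the expected increase (future minus current) on current reliance with the same
# controls used in the paper; a negative coefficient on 'current' indicates convergence.
model = smf.ols("I(future - current) ~ current + years + I(years**2) + hq + C(vpu) + C(sector)",
                data=df).fit(cov_type="HC1")
print(model.params["current"], model.bse["current"])
```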
In both cases there is a tendency for the units that currently rely less on WB research to expect larger growth in usage in the future; the correlation coefficients between the expected increase and current levels are -0.32 and -0.22 across VPU and sectors respectively. However, it remains that expected future reliance on WB research is significantly lower for the sectors with low current familiarity and use. The mean score for future reliance on research for staff in the seven sectors with lowest mean familiarity scores in Table 4—namely ARD, EM, ENV, PSG, SDV, TRN and URB—is 5.55 (s.e.=0.21) as compared to 6.29 (s.e.=0.14) for all other staff and the difference is significant at the 0.3% level (t=2.96).20 Confining attention to the traditional “hard sectors” (ARD, EM, TRN and URB) their mean expected future reliance on research is 5.34 (0.28) as compared to 6.72 (0.16) for EP, EDU, HNP, POV and SP (and the difference is significant; t=4.31). Even with the signs of convergence we have seen, the hard sectors can be expected to persist in their relatively low reliance on research for their work. 4. Accessing research within the Bank How do the Bank’s practitioners obtain the research they use? In addition to papers and reports, there are learning programs, seminars, formal and informal contacts with researchers, and formal and informal networks. The survey data on self-assessed familiarity with Bank research analyzed 11 above offer a way of assessing how well these various channels are performing. This reveals some sources of friction in the diffusion process for research findings. The Bank has a dedicated formal expenditure item for “learning.” This is mainly staff time devoted to attending various internal courses and other learning events. Tabulations are available of the expenditures devoted to learning by VPU. I find that the expenditure per staff member in 2010-11 is uncorrelated with familiarity scores across VPUs; the correlation coefficients are 0.01 and 0.11 for the mean score and the proportion of 5+ scores.21 So this does not appear to be an important channel for learning about the findings from WB research. The survey asked the following question that gets at a broader range of channels for learning: “When obtaining research from the World Bank, how frequently do you get the research from:…” Respondents were then presented with a list of options and a 10-point scale. Table 7 summarizes survey responses. The most highly-rated sources of WB research are “reports or papers,” the Bank’s intranet, flagship reports, and the Bank’s main working paper series. Seminars etc., and authored reports are close behind. The lowest rated source (on average) is “DECRG researcher cross-support.” Arguably a better way of thinking about the relative importance of these various sources of knowledge is to ask how well they predict the respondents’ own assessments of their familiarity with WB research. Staff may use source A more frequently than B (hence with a higher score for A in the survey response) yet B has the stronger impact on familiarity with research. For example, when a researcher provides cross-support to a TTL in operations, he or she will probably get a deeper understanding and appreciation of the contributions of research to operational work than when reading a final research product that is less distantly related to the needs of that person. Then the cross-support has greater impact. 
To explore this alternative interpretation, Table 8 gives a regression of the familiarity scores on the scores for sources of knowledge. I include controls for location, sector and VPU; filtering out these sources of heterogeneity helps in identifying the relationship in the data. In explaining the differences in overall familiarity with WB research, the use of DECRG researchers in cross-support now emerges as the most important factor, with both the highest and the most significant regression coefficient. Also important are informal discussions with 12 researchers and journal articles, though the latter variable is only significant at the 12% level. (The results were similar if one dropped the controls for years of service and VPU/sector.) The direct (formal and informal) interactions with DECRG researchers (including cross-support) have high weights in affecting familiarity with WB research, even though they score relatively poorly in the mean ratings for frequency of their use given in Table 7. Similarly to other large organizations, the Bank has created official “networks.” (Recall that these map 1-to-1 with “sectors” in the Bank’s matrix structure.) The networks are intended to play a key role in linking sources of knowledge (including WB research) with practitioners within the Bank. How well do they serve this function? The survey asked: “To what extent does the network that you belong to help you navigate the World Bank’s body of research and researchers when you request information?” Again, respondents were asked to answer on a 10 point scale from 1=“Not at all” to 10=“Very much.” The mean response was 4.86 and the median was 5. The modal value was 1; 15% of respondents gave this answer. The differences across sectors can be seen in Table 9. The most highly rated sector is education, for which 70% of those mapped to the sector rated its performance as a 6 or higher, as compared to 42% for all staff. The next highest in terms of mean score is HNP, with POV coming third; the sector with the lowest mean score is ENV, followed by SDV. Having a network that performs poorly adds friction to the diffusion process for Bank research. This is evident in the fact that the performance ratings of the networks are positively correlated with familiarity with research. This is clear from comparing the sector means in Tables 4 and 9; the correlation coefficient between mean scores is 0.508. Across all respondents the correlation coefficient is 0.230 (which is statistically significant at the 0.1% level). The in-house research department (DECRG) is the key internal unit aiming to supply relevant research findings to Bank operations, but there are differing views about its success in this role. The survey asked: “Which of the following statements best represents your views about research at the Bank: (1) I would find it more valuable and useful if the institution were to out-source more of its research; (2) I would find it more valuable and useful if the institution committed more resources to research within an in-house research department.” 13 Over the whole sample, 63.0% supported option (2), for more resources for in-house research. The VPU with the lowest support was ECA, with 56.6% support, and, the highest was WBI at 85.7%. However, excluding WBI, the inter-VPU differences are not statistically significant at the 5% level. 
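One standard way to test whether such inter-unit differences in a binary choice are statistically significant is a chi-square test of independence on the unit-by-choice contingency table. The paper does not say which test it used, so this is only an illustrative sketch with made-up counts:

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: one row per VPU, columns = (prefers out-sourcing,
# prefers more in-house research). The counts are invented for illustration only.
counts = np.array([
    [30, 40],
    [10, 60],
    [25, 45],
    [20, 42],
])
chi2, p, dof, expected = stats.chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p > 0.05 would match "not significant"
```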
Across sectors, the lowest support for in-house research is again EM, with only 40.0% supportive, while the highest was the EP sector, for which 82.2% of respondents preferred option (2), followed closely by POV (81.5%). Those who are more familiar with and rely more on WB research tend to be more supportive of higher resources being devoted to in-house research. With the same set of controls, the correlation coefficient between support for the in-house research department and current reliance on research is 0.40. A similar pattern is found for familiarity with research, though the correlation coefficient was lower at r=0.23. An even stronger correlation is found between support for in-house research and expected future reliance on WB research (r=0.49). Table 10 gives the Probit regression coefficients for the probability of supporting the in- house research department (i.e., picking (2) above). Controlling for years, location, VPU and sector, support for greater resources for the research department is greater among those who rate highly the technical quality and policy relevance of WB research and those who think it helps with AAA. It also tends to be greater for those who have a less favorable view of external consultants. The direct responsiveness of the research department to operational demands is a potentially important channel for knowledge diffusion in an organization such as the World Bank, but how well does it work? The survey asked two questions about the experiences of staff in working with DEC, namely: “To what extent are researchers at DEC responsive to your needs?” and “To what extent are DEC researchers responsive and available for cross support?” Again, a 10-point scale was used from 1=“Not at all” to 10=“To a very high degree.” For the first question there was also an option for “Depends on the researcher; some are responsive, some are not.” Since cross support is mandatory for DECRG researchers, the degree of “responsiveness” will undoubtedly depend in part on whether the researcher in question is above or below the minimum required (set at 13 weeks per year). So there is likely to be some random variability depending on when the researchers are approached by operations. Figure 4 gives the density functions for scores 1-10. Again we see signs of bi-modality, with a larger sub-group of 14 staff centered around an answer of 7-8 and a smaller mass centered around 2-3. The mean and median responses to the first question on a 10-point scale were 5.80 and 6 respectively with n=241 (there is a large number of missing values for both these questions). 12.9% of respondents gave the answer “Depends on the researcher.” The mean and median for the second were 5.47 and 6 respectively (n=254). Tables 11 and 12 give the breakdown of these indicators across VPUs and sectors. The proportion rating DEC researchers as 5+ on their responsiveness varies from 50% in OPC to 100% in WBI. The proportion with 5+ for availability for cross-support varies from 36% in South Asia to 79% in ECA. The differences across sectors in assessments of DEC responsiveness are particularly striking, with only 20% for staff in SDV rating responsiveness as high, compared to 90% of staff in POV, with EP close behind at 86%. The survey asked about other factors relevant to demand for research, notably its relevance, accessibility and timeliness. (Each category was given an explanation in the survey instrument.22) And again a 10 point scale was used. Figure 5 gives the densities. 
A marked bimodality is evident for “relevance” and (less so) “accessibility” and the answers to these questions were highly correlated (r=0.74). “Timeliness” is distributed somewhat differently though still correlated with the other measures (r=0.72 for both relevance and access). The mean score on relevance was 5.80 (n=445), 6.12 (n=449) on access and 5.25 (n=383) for timeliness, which appears to be the more important constraint on demand for research. Research takes time, and often requires a longer time horizon than development projects allow. 5. The value of research in Bank operations The incentive to learn about the Bank’s research is greatest when it is perceived to be of high value to operational staff in their work. The survey asked: “please rate the overall value of World Bank research for your work, on a scale of 1-10 where 1 means not valuable at all, 10 means extremely valuable.” And again respondents were presented with a 10 point scale where 1=”not valuable at all” and 10=”extremely valuable.” Figure 6 gives the densities of responses to the “value” question. We see a marked bi-modality, with a high value mode around 7-8 and a low value mode around 3. The mean score is 5.61 and the median is 6. The density function in 15 Figure 6 suggests that a score of 5 or more is reasonable in defining the “high-value” group. We find that 65.5% of respondents attached high value to WB research. Selective response is clearly inflating these numbers, though the effect appears to be modest when one corrects for the variation in response rates across VPUs. This adjustment brings the mean assessed value down to 5.54 from 5.61. While this is small, there may well be a larger bias stemming from latent differences between staff within a given VPU. There are marked differences in the value attached to research across VPUs and sectors. Tables 13 and 14 give the summary statistics. The VPU that attaches the lowest average value to WB research is SDN, with a mean score of 5, and 55% of respondents giving a rating of 5 or more. The VPU with the highest value attached to WB research is WBI, with an average score of 7, and all the WBI respondents rated WB research highly. The next highest is PRM, with a mean score of 6.5. The sectoral VPUs generally put higher value on research than the regional VPUs. The regional VPU with the highest mean score is (again) MNA. The traditional “hard infrastructure” sectors tend to put lower average value on research for their work, suggesting lower incentives to learn about WB research. The mean score for ARD, EM, TRN and URB is 4.82 (0.41) as compared to 6.27 (0.27) for EP, EDU, HNP, POV and SP; the difference is significant (t=2.94). The sector for which staff attach the lowest value to WB research for their work is EM (a mean score of 4.1, with only 41% rating it highly), followed by TRN, URB and ARD. The sector with the highest mean value is the poverty sector within PRM, with a mean score of 7 and for which 93% of respondents rated WB research highly in terms of its value for their work. How then are staff distributed across the four categories identified in the introduction (Table 1)? Table 15 gives the breakdown for the 519 responses with complete information.23 We see that amongst those respondents who feel that WB research is important for their work roughly two-thirds have “high” familiarity with WB research products. The “functionally well- informed” make up 42% of the sample. The independently well-informed are a relatively small group. 
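As a sketch of how the Table 15 classification can be built from the two survey scores: the data frame below is synthetic, but the cut-offs follow the text, with a value score of 5 or more counting as "high value" and a familiarity score of 6 or more as "high familiarity".

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the 519 staff-level responses; 'value' and 'familiarity' are 1-10 scores.
rng = np.random.default_rng(1)
df = pd.DataFrame({"value": rng.integers(1, 11, 519),
                   "familiarity": rng.integers(1, 11, 519)})

high_value = np.where(df["value"] >= 5, "high value", "low value")
high_familiarity = np.where(df["familiarity"] >= 6, "high familiarity", "low familiarity")

# Shares in the four cells of Table 1: e.g. low familiarity with high value corresponds
# to the "frustrated uninformed" group.
print(pd.crosstab(high_familiarity, high_value, normalize="all").round(2))
```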
Amongst those with low perceived familiarity with research, slightly less than half put low value on research for their work, while the rest (one quarter of all respondents) think research is valuable, and are clearly are not getting enough of it. 16 The “frustrated uninformed” can be thought of as a target group for extra effort at research dissemination. So it is of interest to see where they are found. Tables 16 and 17 give the VPU and sectoral distributions of the four types of staff. WBI has (by far) the highest share of the “frustrated uninformed” staff, with 65% in this category, the rest being “functionally well- informed”—both valuing research for their work and familiar with it, who comprise 42% of respondents. The MNA region has the lowest, as 13%. Breaking it down by sectors, it is transport staff that have the highest share of “frustrated uninformed” group, at 35%, with URB and SDV close behind. EDU has the lowest, at 9%. The share of “functionally well-informed” varies from 35% in SDN to 63% in PRM, while amongst sectors it varies from 14% in energy and mining to 83% in the poverty sector. A closer look at the “voluntarily uninformed” does not suggest that they have simply switched to external (non-WB) sources of research. They give a relatively low rating to academia as a source—a mean score of 5.53, as compared to 6.87 for all other staff, and the difference is significant at the 0.1% level. They also rate consultants lower than average, at a mean of 6.07 as compared to 6.54 for the rest, and the difference is significant at the 5% level. At the same time they have an above average desire to increase their use of the Bank’s research; their mean difference between expected future usage and past usage (both on the 10-point scale) is 0.80 versus 0.54 for the rest of the staff and 0.43 for the “functionally well-informed.” (The former difference is only significant at the 13% level, but the latter makes it at 3%.) 6. Does an incentive to learn translate into knowledge? The answer to this question is far from obvious. Attaching a high value to research for one’s work need not translate into actual knowledge about research. That link will be broken if the Bank’s internal research organization is unresponsive, or its products inaccessible. The survey evidence suggests that stronger incentives for learning do generally translate into higher investment by staff in learning, as reflected in the familiarity with research. Those respondents who put a higher value on research for their work are more likely to be familiar with it, as can be seen in Figure 7 (with the same set of controls used earlier). The (non-parametric) regression line shows a persistent increase over the whole range of familiarity scores. If we focus solely on those respondents who believe they have a high familiarity with WB research—again a 17 score of 6 or more—then the mean assessed value is 6.53 (s.e.=0.14), as compared to 4.49 (0.14) for those who acknowledge they are not very familiar with research; the difference is highly significant (t=10.22). Amongst those familiar with research, 69.2% rated it of high overall value (s.e.=2.8%), compared to 35.4% (3.1%) for those relatively unfamiliar. How much does familiarity with research respond to stronger incentives to learn? The regression coefficient of the familiarity score on the value score is 0.45 (with a standard error of 0.036). 
Going from a 1 to a 10 on the perceived value of research for one’s work translates on average into a mean increment of four levels for familiarity. Table 18 gives regressions for familiarity scores with controls. The first specification uses only the controls for years of experience, its squared value, location, sector and VPU, while the second includes the assessed value of research for the respondent’s work, as the measure of incentives for learning. (I return to the third.) Allowing for differing incentives for learning more than doubles the share of the variance in familiarity with research that is explained by the controls alone. With the controls, the regression coefficient on the incentive variable falls slightly, to 0.421. A one standard deviation increase in the value attached to research adds about one half of a standard deviation to the familiarity score.24 About half of the gap between the mean familiarity scores for ARD, EM, TRN and URB (on the one hand) and EP, EDU, HNP, POV and SP (on the other) is accounted for by the difference in the value attached to research by staff working in these sectors.25 In interpreting these findings, one can view the glass as half empty or half full. The fact that knowledge responds to incentives for learning suggests that the internal system for research and its diffusion is working to create internal absorptive capacity. But the fact that the regression coefficient is significantly less than unity is also suggestive of frictions in the process. (Figure 7 also shows the 45 degree line.) It may well be that the responses to the “value” question are picking up a latent characteristic of respondents whereby some operational staff are just naturally “research consumers”—they are more interested in research and seek it out more avidly even when their incentive for learning (as part of their job) is low. This would cast doubt on the interpretation of the “value” question as indicating the respondent’s incentive for learning about Bank research. 18 The survey also asked respondents to rate academia as a source of research relevant to their work. This is likely to pick up the “research consumer” attribute. So too is the respondent’s rating of “informal discussions with researchers” as a source of knowledge about research. Column (3) of Table 18 gives the regression with these two extra controls. Both are significant, and the coefficient on value falls to 0.30, but it is still significantly positive and significantly less than unity, again suggestive of frictions. Separately to these effects, familiarity with WB research also varies with years of service and this effect is markedly nonlinear. Familiarity rises with years of service up to 16 years (18 years using the specification in Column 3) and falls after that. The median years of service is 10; 20% of respondents had more than 16 years of service. There is also a difference in familiarity with WB research between those based at the Bank’s head-quarters in Washington DC (where the research department is located) and those in the field offices in developing countries; 65% of respondents were in the former group, versus 34% in field offices (the remaining 1% were in WB offices in developed countries, mainly Europe). The fact that familiarity is lower for those in country offices suggests that the advances in communication technology—the WB has a well-developed “intranet” to facilitate this—have not eliminated the advantages of physical proximity. 
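Two magnitudes reported in this section follow mechanically from the regression output and are worth spelling out. With a coefficient of roughly 0.45 on the value score, moving from the bottom to the top of the 10-point value scale implies a familiarity gain of about 0.45 × (10 − 1) ≈ 4 points, the "four levels" noted above. And with a quadratic in years of service, the turning point is where the marginal effect is zero:

$$ \frac{\partial\,(\text{familiarity})}{\partial\,(\text{years})} \;=\; \beta_{1} + 2\beta_{2}\,\text{years} \;=\; 0 \quad\Longrightarrow\quad \text{years}^{*} \;=\; -\frac{\beta_{1}}{2\beta_{2}}, $$

which the estimates in Table 18 put at about 16 years (column 2) or 18 years (column 3).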
It is of interest to look for other evidence for or against the view that stronger incentives for learning translate into knowledge. An objective clue to how much the Bank’s internal system for knowledge management responds to differences in the incentive for learning can be found in how the demand for paid cross-support from the Bank’s research department varies with answers to the value question. Across VPUs, the mean score for the value of WB research is highly correlated with the per capita demand for cross-support from DECRG (using the data in Tables 1 and 12); the correlation coefficients between the three-year mean of cross-support per staff member GG+ and the mean value score is 0.72, and it is also 0.72 with the proportion of VPU staff giving a value core over 5. This is consistent with the view that incentives for learning in the Bank do generate a response. 19 7. Conclusions One can think of two stylized models of how research is used by development practitioners. In the first, they have a demand for knowledge that does not stem from its direct bearing on their work. Much development research is a public good. Practitioners might read research findings to better understand the world in which they work, even when that understanding is essentially irrelevant to the specifics of their work. Alternatively, in the second model, research has a direct value in the work of practitioners—such as by informing project choices at the entry stage and assessing impacts later on—and research findings are sufficiently relevant and accessible to entail that practitioners become well informed. To assure that an international aid agency such as the World Bank provides high quality development aid, we hope that the second model is the best characterization. But is it? If it was true in general that research findings are largely consumed independently of their direct bearing on the work of development practitioners then we would not expect greater familiarity with World Bank research to be associated with a higher perceived value of that research for the practitioners’ work, or higher stated reliance on research for that work. Indeed, given that there are heterogeneous costs of learning, those most familiar with research findings need not overlap much at all with those who value research most for their work. But that is not what we see in these survey data. Two-thirds of the responding senior staff put a high value on the Bank’s research for their work. They appear to have a reasonably strong incentive to learn, though stronger for some than others. And operational staff with a stronger incentive to learn tend to be more familiar with Bank research. Only 12% of staff fall into the category of “independently well-informed”—those who report high familiarity with research, but attach low value to it for their work. The Bank’s senior staff members are mainly found in three other groups. The first are those who are “voluntarily uninformed” about research—they attach low value to it for their own work, and report correspondingly low familiarity with research; 23% of senior staff fall into this group. A second group puts a high value on Bank research but its members are not generally successful in accessing that research or find it of limited relevance. This group is not evenly spread across Bank units and sectors, suggesting that there may be scope for a targeted effort at research and its dissemination. The third and largest group is the “functionally well-informed,” comprising 42% of staff. 
They put a high value on Bank research 20 for their work (a mean score of almost 8 on a scale of 1-10) and are (by their own assessment) quite familiar with research findings (also a mean score of 8 on a scale of 1-10). While this paper’s results are consistent with the view that stronger incentives for leaning about research translate into greater knowledge about research findings within the World Bank, they are also suggestive of some significant frictions. The slope of the relationship between perceived value and familiarity with Bank research—the effect of an increment to perceived value on familiarity, both on a 1-10 scale—is positive but significantly less than unity, suggesting frictions in how the incentive for learning translates into knowledge. The responsiveness of researchers and the timeliness and accessibility of their outputs are clearly important to how much learning incentives lead to useful knowledge. There are some marked differences across Bank units. Staff working in the economic policy and human development sectors tend to be both more familiar with the Bank’s research and attach higher value to it than do others, notably in the more traditional areas of the Bank’s lending, such as agriculture, energy and mining, transport and urban development, which account for just under half of Bank lending. For example, while only 7% of senior staff members working on poverty and 10% of those mapped to economic policy are “voluntarily uninformed,” the proportion rises to 37% for staff working on urban development and 41% of staff in the energy and mining sector. The differences across units in the demand for the Bank’s research are correlated with the incidence of PhDs and economists, suggesting that internal research capacity in operational units helps create absorptive capacity for knowledge in those units. The Bank’s more formal networks also play a role in the research diffusion process, especially in the economic policy, poverty and human development sectors; however, staff working in the traditional sectors tend to give lower ratings to their networks in their performance in helping them connect to research. What gives rise to these inter-sectoral differences? Both demand- and supply-side factors have clearly played a role. Staff working in the traditional sectors may well have turned away from Bank research in part because they have found it to be of little relevance to their needs, though they also put relatively low value on research sources outside the Bank. Today’s research priorities may well be poorly matched with the issues faced by practitioners in these sectors. For 21 example, the current emphasis on randomized trials in development economics has arguably distorted knowledge even further away from the hard infrastructure sectors where these tools have less applicability (Ravallion, 2009). Making the supply of research more relevant to the needs of development practitioners would undoubtedly help. However, this is an incomplete explanation, since the supply of research is clearly also determined by demand. In turn, demand stems in no small measure from the extent to which “development impact” is challenged by donors. Impact often appears to be taken for granted in the traditional hard infrastructure sectors (though in truth the evidence is often rather weak, given relatively low levels of investment in ex-post evaluations). 
This stands in marked contrast to the social sectors, where lending and policy operations have had to work hard to justify themselves and have drawn more heavily on research to do so; the large body of research on poverty and human development that has emerged in the last 15 years is indicative of this longer-term shift in priorities. Clearly, if the presumption of "impact" is routinely challenged by donors, aid organizations and citizens, then project staff will face strong incentives for learning about impact. And the results of this paper suggest that (at least for the World Bank) stronger incentives for learning will yield greater familiarity with, and use of, evaluative research.

References

Burnside, C. and Dollar, D., 2000. Aid, policies, and growth. American economic review, 90(4), 847-868.
Clemens, M.A., Radelet, S., Bhavnani, R.R. and Bazzi, S., 2012. Counting chickens when they hatch: Timing and the effects of aid on growth. Economic journal, 122, 590-617.
Cohen, W.M. and Levinthal, D.A., 1990. Absorptive capacity: A new perspective on learning and innovation. Administrative science quarterly, 35(1), 128-152.
Deininger, K., Squire, L. and Basu, S., 1998. Does economic analysis improve the quality of foreign assistance? World bank economic review, 12(3), 385-418.
Denizer, C., Kaufmann, D. and Kraay, A., 2011. Good countries or good projects? Macro and micro correlates of World Bank project performance. Policy Research Working Paper 5646, Washington DC: World Bank.
Fardoust, S. and Flanagan, A.E., 2011. Quality of knowledge and quality financial assistance: A quantitative assessment in the case of the World Bank. Mimeo, Development Economics, World Bank, Washington DC.
Hage, J.T., 1999. Organizational innovation and organization change. Annual review of sociology, 25, 597-622.
Hansen, H. and Tarp, F., 2001. Aid and growth regressions. Journal of development economics, 62(4), 547-570.
Kogut, B. and Macpherson, J.M., 2011. The mobility of economists and the diffusion of policy ideas: The influence of economics on national policies. Research policy, 40, 1307-1320.
Mansuri, G. and Rao, V., 2012. Localizing development: Does participation work? Policy research report, World Bank, Washington DC.
Ravallion, M., 2009. Should the randomistas rule? Economists' voice, 6(2), 1-5.
Ravallion, M., 2011. Development impact calls for knowledgeable development practitioners. Development Impact Blog, World Bank, Washington DC.
Ravallion, M. and Wagstaff, A., 2012. The World Bank's publication record. Review of international organizations, forthcoming.
Temple, J., 2010. Aid and conditionality. In Handbook of development economics, Volume 5, Rodrik, D. and Rosenzweig, M. (eds). Amsterdam: North-Holland.
Wane, W., 2004. The quality of foreign aid: Country selectivity or donor incentives? Policy Research Working Paper 3325, Washington DC: World Bank.
Wapenhans, W., 1992. Effective implementation: Key to development impact. Report of the World Bank's Portfolio Management Task Force, Washington DC: World Bank.
World Bank, 2011. Annual Report, Washington DC: World Bank.
Figure 1: Density functions for self-rated familiarity with World Bank research. [Densities of ratings on the 0-10 scale for "familiarity with WB research" and "familiarity with controls".]

Figure 2: Density functions for current and expected future reliance on WB research. [Densities of ratings on the 0-10 scale for "currently rely on WB research" and "expect to rely on WB research in future".]

Figure 3: Those who have relied on WB research before are more likely to expect to return. [Plot of expected future reliance on WB research (with controls) against past reliance on WB research (with controls).]

Figure 4: Densities for DEC responsiveness and availability for cross-support. [Densities of ratings for "responsiveness of DEC researchers" and "availability for cross-support".]

Figure 5: Densities for relevance, accessibility and timeliness of WB research. [Densities of ratings for the relevance, accessibility and timeliness of WB research.]

Figure 6: Densities of perceived value of WB research to respondent's work. [Densities of ratings for "perceived value of WB research to own work" and "perceived value with controls".]

Figure 7: The more valuable research is to the staff member's work the more familiar they are with research. [Plot of familiarity with research (with controls) against perceived value of research to own work (with controls); r=0.49.]

Table 1: Classification of development practitioners
(Rows: staff member's personal familiarity with research findings, low or high. Columns: incentive to learn, i.e., perceived value of research for own work, low or high.)
Low familiarity, low value: "Voluntarily uninformed": do not feel the need for research and do not have a general interest in learning from new research.
Low familiarity, high value: "Frustrated uninformed": need to know more from research but cannot access it, or find current research of little practical use.
High familiarity, low value: "Independently well-informed": do not need research for their own work, but have a general interest in learning about development.
High familiarity, high value: "Functionally well-informed": research is an important input to the staff member's work and access to relevant research is not a problem.

Table 2: Number of staff invited to participate and response rate by Vice-Presidencies
Columns: no. of staff invited (all staff levels GG+); response rate (%); % of GG+ staff with PhD; no. of staff with "Economist" in job title in staff directory; annual cross-support by research department, $'000 (3-year mean); annual cross-support per capita of GG+ staff ($/person/year).
Regional VPUs
Sub-Saharan Africa (AFR): 651; 15.8; 46.9; 57; 764; 1174
East Asia and Pacific (EAP): 349; 18.6; 44.5; 31; 460; 1318
Eastern Europe and Central Asia (ECA): 337; 19.0; 49.4; 43; 303; 900
Latin America and Caribbean (LCR): 309; 24.6; 55.0; 42; 553; 1790
Middle East and North Africa (MNA): 203; 12.3; 48.4; 25; 281; 1383
South Asia (SAR): 331; 15.7; 46.8; 22; 429; 1296
"Sectoral" VPUs
Finance and Private Sector (FPD): 114; 16.7; 38.0; 8; 189; 1661
Human Development (HDN): 77; 31.2; 61.5; 38; 111; 1437
Operational Policy (OPC): 83; 21.7; 49.7; 10; 32; 386
Poverty Reduction and Economic Management (PRM): 76; 50.0; 75.4; 56; 332; 4368
Sustainable Development (SDN): 293; 14.3; 42.3; 11; 374; 1278
World Bank Institute (WBI): 75; 17.3; 54.9; 11; 81; 1084
Development Economics (DEC): 0; n.a.; 77.2; 110; n.a.; n.a.
Total: 2898; 19.2; 40.6*; 945*; 4193; 1447
Note: VPU is missing for some observations. These are included in the total count.
* Includes all units not identified above.

Table 3: Summary statistics on familiarity with WB research across Vice-Presidencies
Columns (mean, with standard error in parentheses): average familiarity with research (10-point scale); high familiarity with research (% above 5 on scale); rating of academia as a source of research globally; rating of external consultants as a source of research globally.
Regional VPUs
Sub-Saharan Africa: 5.67 (0.21); 52.13 (5.21); 6.19 (0.25); 6.31 (0.23)
East Asia and Pacific: 5.31 (0.27); 44.62 (6.24); 6.62 (0.28); 6.32 (0.26)
Eastern Europe and Central Asia: 5.41 (0.29); 47.46 (6.58); 6.38 (0.29); 6.87 (0.22)
Latin America and Caribbean: 5.34 (0.24); 49.32 (5.92); 6.84 (0.26); 6.62 (0.24)
Middle East and North Africa: 6.13 (0.44); 66.67 (9.74); 6.81 (0.41); 6.71 (0.29)
South Asia: 5.88 (0.30); 60.00 (7.01); 6.22 (0.31); 6.00 (0.32)
"Sectoral" VPUs
Finance and Private Sector: 5.50 (0.56); 50.00 (11.92); 6.80 (0.62); 6.80 (0.60)
Human Development: 7.33 (0.45); 80.95 (8.67); 8.10 (0.42); 7.00 (0.54)
Operational Policy: 5.41 (0.58); 47.06 (12.25); 7.38 (0.45); 6.62 (0.48)
Poverty Reduction and Econ. Mgt.: 6.89 (0.42); 71.43 (7.73); 6.97 (0.38); 6.25 (0.44)
Sustainable Development: 5.63 (0.42); 50.00 (8.00); 6.17 (0.37); 6.30 (0.35)
World Bank Institute: 5.18 (0.88); 36.36 (14.68); 6.29 (0.44); 6.43 (0.79)
Total: 5.74 (0.10); 53.76 (2.19); 6.59 (0.10); 6.44 (0.10)
Note: the totals include some other VPUs with insufficient sample sizes.

Table 4: Summary statistics on familiarity with WB research across sectors
Columns: DECRG staff allocation (%); share of research publications (%); then (mean, with standard error in parentheses) average familiarity with research (10-point scale); high familiarity with research (% above 5 on scale); rating of academia as a source of research globally; rating of external consultants as a source of research globally.
Agriculture and Rural Development (ARD): 9.01; 2.85; 5.47 (0.31); 50.00 (8.22); 6.61 (0.33); 6.57 (0.35)
Economic Policy (EP): 29.73; 27.74; 6.50 (0.19); 67.29 (4.59); 6.81 (0.21); 6.26 (0.20)
Education (EDU): 3.60; 2.23; 6.46 (0.42); 68.57 (7.95); 7.52 (0.35); 7.03 (0.37)
Energy and Mining (EM): 4.05; 3.89; 4.59 (0.52); 31.82 (10.06); 5.15 (0.41); 6.35 (0.42)
Environment (ENV): 4.95; 4.91; 5.32 (0.48); 45.45 (10.75); 6.75 (0.44); 6.73 (0.41)
Finance and Private Sector (FPD): 11.71; 20.03; 5.70 (0.36); 50.00 (8.01); 6.46 (0.40); 6.45 (0.35)
Health, Nutrition and Population (HNP): 4.05; 6.52; 5.92 (0.36); 61.11 (8.23); 7.42 (0.30); 6.61 (0.31)
Poverty (POV): 15.32; 6.45; 6.97 (0.30); 83.33 (6.89); 7.19 (0.32); 6.29 (0.38)
Public Sector Governance (PSG): 6.31; 8.29; 5.28 (0.44); 37.50 (8.67); 6.04 (0.41); 5.93 (0.43)
Social Development (SDV): 3.15; 0.59; 5.23 (0.31); 38.46 (13.67); 6.73 (0.29); 5.80 (0.43)
Social Protection (SP): 6.31; 10.58; 6.71 (0.57); 64.71 (11.74); 7.40 (0.48); 7.80 (0.36)
Transport (TRN): 0.45; 1.69; 5.45 (0.66); 45.00 (11.27); 6.65 (0.60); 6.29 (0.39)
Urban Development (URB): 1.35; 4.24; 5.05 (0.58); 31.58 (10.80); 6.24 (0.52); 7.24 (0.52)
Total: 100.00; 100.00; 5.74 (0.10); 53.76 (2.19); 6.59 (0.10); 6.44 (0.10)
Note: the totals include some other sectors with insufficient sample sizes. Sectoral VPU in parentheses (Table 2), though note that sector staff are also in regional units.

Table 5: Current and expected future demand for WB research across Vice-Presidencies
Columns (mean, with standard error in parentheses): current reliance on WB research (10-point scale); expected increase in demand (score for expected future demand minus score for current demand).
Regional VPUs
Sub-Saharan Africa: 5.28 (0.28); 0.55 (0.18)
East Asia and Pacific: 4.98 (0.33); 0.51 (0.15)
Eastern Europe and Central Asia: 5.45 (0.34); 0.48 (0.14)
Latin America and Caribbean: 5.18 (0.33); 0.71 (0.18)
Middle East and North Africa: 6.05 (0.48); 0.95 (0.26)
South Asia: 5.35 (0.39); 0.80 (0.15)
"Sectoral" VPUs
Finance and Private Sector: 4.87 (0.76); 0.43 (0.28)
Human Development: 6.38 (0.54); 0.43 (0.27)
Operational Policy: 4.46 (0.73); 0.92 (0.64)
Poverty Reduction and Econ. Mgt.: 6.45 (0.45); 0.37 (0.16)
Sustainable Development: 5.19 (0.43); 0.63 (0.25)
World Bank Institute: 5.57 (0.79); 0.43 (0.35)
Total: 5.40 (0.12); 0.59 (0.06)

Table 6: Current and expected future demand for WB research across sectors
Columns (mean, with standard error in parentheses): current reliance on WB research (10-point scale); expected increase in demand (score for expected future demand minus score for current demand).
Agriculture and Rural Development: 5.43 (0.55); 0.40 (0.16)
Economic Policy: 6.18 (0.26); 0.72 (0.14)
Education: 5.58 (0.46); 0.61 (0.23)
Energy and Mining: 3.55 (0.38); 0.74 (0.21)
Environment: 5.82 (0.69); 0.35 (0.22)
Finance and Private Sector: 4.94 (0.41); 0.58 (0.14)
Health, Nutrition and Population: 5.64 (0.41); 0.33 (0.13)
Poverty: 6.96 (0.34); 0.67 (0.24)
Public Sector Governance: 5.31 (0.44); 0.50 (0.33)
Social Development: 4.91 (0.53); 0.64 (0.33)
Social Protection: 5.87 (0.62); 0.20 (0.17)
Transport: 4.67 (0.68); 0.65 (0.27)
Urban Development: 5.12 (0.48); 0.29 (0.45)
Total: 5.40 (0.12); 0.59 (0.06)

Table 7: Sources of knowledge on WB research
Columns: mean; median; standard deviation.
Informal discussion with researcher(s): 4.50; 5; 2.72
Hiring researchers (consultants): 4.96; 5; 2.70
Report(s) or paper(s): 6.68; 7; 2.27
DECRG researcher cross-support: 3.69; 3; 2.56
Other researcher from Network or other part of WB: 5.02; 5; 2.54
Product available on the intranet: 6.52; 7; 2.44
Seminar, workshop or presentation: 5.85; 6; 2.36
Authored report(s): 5.87; 6; 2.55
Flagship report such as WDR, PRR etc.: 6.53; 7; 2.49
Policy Research Working Paper series: 6.37; 7; 2.45
Journal article authored by Bank staff: 4.60; 4; 2.70

Table 8: Implicit weights on various sources of WB research in predicting familiarity
Columns: coefficient; t-statistic.
Intercept: 2.922; 4.685
Informal discussion with researcher(s): 0.118; 2.695
Hiring researchers (consultants): 0.051; 1.148
DECRG researcher (cross-support): 0.163; 3.285
Intranet: -0.016; -0.294
Seminars, workshop or presentation: 0.036; 0.679
Authored report(s): -0.070; -1.252
Flagship report such as WDR, PRR: 0.085; 1.649
Policy Research Working Paper: 0.046; 0.811
Journal article by WB staff: 0.103; 1.546
N: 398. R2: 0.366. Adjusted R2: 0.301. S.E. of regression: 1.826. Mean dependent var.: 5.954. S.D. dependent var.: 2.184. F-statistic: 5.610. Prob(F-statistic): 0.000.
Note: Regression included controls for years of service, sector and VPU.

Table 9: Staff assessments of the performance of networks in connecting to WB research
Columns (mean, with standard error in parentheses): average rating (10-point scale); high rating (% above 5 on scale).
Agriculture and Rural Development: 4.59 (0.46); 24.14 (8.08)
Economic Policy: 4.93 (0.28); 48.84 (5.48)
Education: 6.33 (0.51); 70.00 (8.50)
Energy and Mining: 4.78 (0.65); 38.89 (11.68)
Environment: 4.00 (0.66); 33.33 (12.37)
Finance and Private Sector: 4.26 (0.48); 35.29 (8.33)
Health, Nutrition and Population: 5.63 (0.46); 53.33 (9.26)
Poverty: 5.48 (0.47); 48.15 (9.77)
Public Sector Governance: 5.16 (0.55); 44.00 (10.09)
Social Development: 4.10 (0.78); 30.00 (14.73)
Social Protection: 5.29 (0.73); 57.14 (13.44)
Transport: 4.71 (0.74); 29.41 (11.23)
Urban Development: 5.38 (0.60); 43.75 (12.60)
Total: 4.86 (0.13); 41.71 (2.44)
Note: the totals include some other sectors with insufficient sample sizes.
Sectoral VPU in parentheses (Table 2), though note that sector staff are also in regional units.

Table 10: Probit regressions for support for more resources for in-house research department
Columns: specification (1) coefficient, z-statistic in parentheses; specification (2) coefficient, z-statistic in parentheses.
Intercept: -1.761 (-3.133); -1.430 (-3.685)
Familiarity with WB research: 0.160 (3.490); 0.147 (3.413)
Technical quality is high: 0.109 (2.221); 0.077 (1.619)
Improves AAA effort: 0.137 (2.242); 0.182 (3.221)
Improves quality of entry in projects: 0.103 (2.076); 0.063 (1.495)
External consultants are useful for your work: -0.101 (-2.104); -0.092 (-2.134)
It is not difficult to find the research I need: -0.095 (-2.207); -0.088 (-2.199)
Controls for years of service, years squared, location, VPU and sector: Yes; No
N: 322; 325
Pseudo R2: 0.228; 0.210
S.E. of regression: 0.424; 0.424
Mean dependent var.: 0.618; 0.615
S.D. dependent var.: 0.487; 0.487
LR statistic: 114.809; 91.134
Prob(LR-statistic): 0.000; 0.000
Note: z-statistics based on White standard errors.

Table 11: Summary statistics on perceptions of DEC responsiveness across VPUs
Columns (mean, with standard error in parentheses): responsiveness of DEC researchers (% rating 5+); availability of DEC researchers for cross-support (% rating 5+).
Regional VPUs
Sub-Saharan Africa: 68.75 (6.86); 61.82 (6.71)
East Asia and Pacific: 73.33 (8.28); 76.67 (7.91)
Eastern Europe and Central Asia: 71.43 (8.76); 78.79 (7.29)
Latin America and Caribbean: 66.67 (9.31); 73.08 (8.91)
Middle East and North Africa: 71.43 (12.39); 76.47 (10.54)
South Asia: 57.14 (11.08); 36.36 (10.51)
"Sectoral" VPUs
Finance and Private Sector: 77.78 (14.22); 70.00 (14.85)
Human Development: 78.57 (11.25); 66.67 (13.94)
Operational Policy: 50.00 (20.94); 50.00 (20.91)
Poverty Reduction and Econ. Mgt.: 86.36 (7.51); 76.47 (10.54)
Sustainable Development: 61.54 (13.84); 50.00 (12.81)
World Bank Institute: 100.00 (0.00); 75.00 (22.18)
Total: 70.95 (2.93); 66.93 (2.96)
Note: the totals include some other VPUs with insufficient sample sizes.

Table 12: Summary statistics on perceptions of DEC responsiveness across sectors
Columns (mean, with standard error in parentheses): responsiveness of DEC researchers (% rating 5+); availability of DEC researchers for cross-support (% rating 5+).
Agriculture and Rural Development: 70.59 (11.36); 75.00 (9.94)
Economic Policy: 85.96 (4.73); 79.69 (5.16)
Education: 60.00 (11.26); 52.63 (11.76)
Energy and Mining: 37.50 (17.60); 50.00 (16.23)
Environment: 75.00 (15.74); 75.00 (15.72)
Finance and Private Sector: 63.64 (10.55); 58.33 (10.33)
Health, Nutrition and Population: 80.00 (9.20); 72.22 (10.84)
Poverty: 90.48 (6.59); 85.71 (7.84)
Public Sector Governance: 81.82 (11.96); 66.67 (13.97)
Social Development: 20.00 (18.39); 28.57 (17.53)
Social Protection: 66.67 (16.16); 66.67 (16.13)
Transport: 37.50 (17.60); 50.00 (18.15)
Urban Development: 55.56 (17.03); 50.00 (16.23)
Total: 70.95 (2.93); 66.93 (2.96)

Table 13: Summary statistics on perceived value of WB research to work across Vice-Presidencies
Columns (mean, with standard error in parentheses): average value of research (10-point scale); high value attached to research (% rating 5 or higher).
Regional VPUs
Sub-Saharan Africa: 5.34 (0.25); 61.70 (5.07)
East Asia and Pacific: 5.37 (0.32); 58.46 (6.18)
Eastern Europe and Central Asia: 5.31 (0.32); 64.41 (6.31)
Latin America and Caribbean: 5.59 (0.30); 67.12 (5.56)
Middle East and North Africa: 5.92 (0.50); 70.83 (9.39)
South Asia: 5.74 (0.35); 64.00 (6.87)
"Sectoral" VPUs
Finance and Private Sector: 6.06 (0.75); 72.22 (10.68)
Human Development: 5.90 (0.57); 66.67 (10.41)
Operational Policy: 5.47 (0.48); 58.82 (12.08)
Poverty Reduction and Econ. Mgt.: 6.46 (0.42); 77.14 (7.18)
Sustainable Development: 5.00 (0.40); 55.00 (7.96)
World Bank Institute: 7.00 (0.48); 100.00 (0.00)
Total: 5.61 (0.11); 65.51 (2.09)
Note: the totals include some other VPUs with insufficient sample sizes.

Table 14: Summary statistics on perceived value of WB research across sectors
Columns (mean, with standard error in parentheses): average value of research (10-point scale); high value attached to research (% rating 5 or higher).
Agriculture and Rural Development: 5.16 (0.42); 57.89 (8.11)
Economic Policy: 6.24 (0.21); 80.37 (3.89)
Education: 5.85 (0.49); 60.00 (8.39)
Energy and Mining: 4.10 (0.51); 40.91 (10.62)
Environment: 5.38 (0.57); 54.55 (10.75)
Finance and Private Sector: 5.49 (0.42); 62.50 (7.75)
Health, Nutrition and Population: 6.17 (0.40); 75.00 (7.31)
Poverty: 7.37 (0.37); 93.33 (4.61)
Public Sector Governance: 4.90 (0.41); 53.13 (8.93)
Social Development: 5.46 (0.44); 69.23 (12.96)
Social Protection: 5.47 (0.58); 64.71 (11.74)
Transport: 4.88 (0.72); 60.00 (11.09)
Urban Development: 4.89 (0.49); 52.63 (11.60)
Total: 5.74 (0.11); 65.51 (2.09)
Note: the totals include some other sectors with insufficient sample sizes. Sectoral VPU in parentheses (Table 2), though note that sector staff are also in regional units.

Table 15: Allocation of World Bank operational staff across the categories in Table 1
(Rows: staff member's personal familiarity with WB research, low or high. Columns: perceived value of WB research for the staff member's work, low or high. An illustrative coding of this classification is sketched after Table 16.)
Low familiarity, low value: "Voluntarily uninformed": N=117 (22.54%); mean familiarity=3.35; mean value=2.72
Low familiarity, high value: "Frustrated uninformed": N=123 (23.70%); mean familiarity=4.00; mean value=6.35
High familiarity, low value: "Independently well-informed": N=62 (11.95%); mean familiarity=7.26; mean value=2.90
High familiarity, high value: "Functionally well-informed": N=217 (41.81%); mean familiarity=7.58; mean value=7.57

Table 16: Percentage distribution of types of staff across Vice-Presidencies
Columns: "voluntarily uninformed"; "frustrated uninformed"; "independently well-informed"; "functionally well-informed"; total.
Regional VPUs
Sub-Saharan Africa: 23.40; 24.47; 14.89; 37.23; 100.00
East Asia and Pacific: 32.31; 23.08; 9.23; 35.38; 100.00
Eastern Europe and Central Asia: 25.42; 27.12; 10.17; 37.29; 100.00
Latin America and Caribbean: 21.92; 28.77; 10.96; 38.36; 100.00
Middle East and North Africa: 20.83; 12.50; 8.33; 58.33; 100.00
South Asia: 22.00; 18.00; 14.00; 46.00; 100.00
"Sectoral" VPUs
Finance and Private Sector: 22.22; 27.78; 5.56; 44.44; 100.00
Human Development: 4.76; 14.29; 28.57; 52.38; 100.00
Operational Policy: 29.41; 23.53; 11.76; 35.29; 100.00
Poverty Reduction and Econ. Mgt.: 14.29; 14.29; 8.57; 62.86; 100.00
Sustainable Development: 30.00; 20.00; 15.00; 35.00; 100.00
World Bank Institute: 0.00; 63.64; 0.00; 36.36; 100.00
Total: 22.54; 23.70; 11.95; 41.81; 100.00
Note: the totals include some other VPUs with insufficient sample sizes.
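As a purely illustrative aside, the following minimal sketch shows how individual survey responses could be assigned to the four categories of Tables 1 and 15. The cut-offs follow note 23 (a "value" rating of 5 or more counts as high; a "familiarity" rating of 6 or more counts as high); the data frame and the column names "value" and "familiarity" are hypothetical, not the survey's actual variable names.

```python
import pandas as pd

# Hypothetical individual responses on the 0-10 scales used in the survey.
df = pd.DataFrame({
    "value":       [2, 7, 3, 9, 6],
    "familiarity": [3, 4, 8, 8, 6],
})

def classify(row):
    """Assign the Table 1 categories using the cut-offs in note 23:
    high value is a rating of 5+, high familiarity is a rating of 6+."""
    high_value = row["value"] >= 5
    high_familiarity = row["familiarity"] >= 6
    if high_familiarity:
        return "Functionally well-informed" if high_value else "Independently well-informed"
    return "Frustrated uninformed" if high_value else "Voluntarily uninformed"

df["category"] = df.apply(classify, axis=1)

# Share of staff in each category, analogous to the percentages in Table 15.
print(df["category"].value_counts(normalize=True).mul(100).round(2))
```

In practice the "Not applicable" and "Prefer not to answer" responses would be dropped first, as in note 9.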
Table 17: Distribution of types of staff across sectors
Columns: "voluntarily uninformed"; "frustrated uninformed"; "independently well-informed"; "functionally well-informed"; total.
Agriculture and Rural Development: 34.21; 15.79; 7.89; 42.11; 100.00
Economic Policy: 10.28; 22.43; 9.35; 57.94; 100.00
Education: 22.86; 8.57; 17.14; 51.43; 100.00
Energy and Mining: 40.91; 27.27; 18.18; 13.64; 100.00
Environment: 36.36; 18.18; 9.09; 36.36; 100.00
Finance and Private Sector: 25.00; 25.00; 12.50; 37.50; 100.00
Health, Nutrition and Population: 11.11; 27.78; 13.89; 47.22; 100.00
Poverty: 6.67; 10.00; 0.00; 83.33; 100.00
Public Sector Governance: 37.50; 25.00; 9.38; 28.13; 100.00
Social Development: 30.77; 30.77; 0.00; 38.46; 100.00
Social Protection: 11.76; 23.53; 23.53; 41.18; 100.00
Transport: 20.00; 35.00; 20.00; 25.00; 100.00
Urban Development: 36.84; 31.58; 10.53; 21.05; 100.00
Total: 22.54; 23.70; 11.95; 41.81; 100.00
Note: the totals include some other sectors with insufficient sample sizes. Sectoral VPU in parentheses (Table 2), though note that sector staff are also in regional units.

Table 18: Regressions for familiarity with research
Columns: specifications (1), (2) and (3); each entry is the coefficient with the t-statistic in parentheses.
Intercept: 4.205 (10.855); 2.107 (5.478); 2.184 (4.120)
Value of research for own work: n.a.; 0.421 (11.734); 0.307 (6.888)
Rating of academia: n.a.; n.a.; 0.140 (2.519)
Informal contact with researchers: n.a.; n.a.; 0.136 (3.417)
Years of service with WB: 0.197 (4.222); 0.191 (4.771); 0.152 (3.237)
Years squared: -0.006 (-3.554); -0.006 (-3.915); -0.004 (-2.527)
Located in country office: -0.305 (-1.440); -0.565 (-3.023); -0.639 (-3.135)
Agriculture and rural development: 0.865 (2.211); 0.787 (2.484); 0.744 (1.939)
Economic policy: 1.629 (5.335); 1.079 (3.806); 0.462 (1.426)
Education: 1.677 (3.660); 1.301 (3.393); 0.707 (1.545)
Energy and mining: -0.179 (-0.312); 0.283 (0.532); -0.247 (-0.409)
Environment: 0.564 (1.072); 0.527 (1.224); -0.227 (-0.388)
Finance and private sector: 0.579 (1.353); 0.454 (1.236); -0.043 (-0.107)
Health, nutrition and population: 1.063 (2.458); 0.678 (1.719); 0.089 (0.198)
Poverty: 1.980 (4.886); 1.004 (3.105); 0.278 (0.708)
Public sector governance: 0.721 (1.539); 0.815 (2.062); 0.388 (0.874)
Social development: 0.172 (0.463); -0.162 (-0.468); -0.782 (-2.071)
Social protection: 1.995 (3.578); 1.757 (2.898); 1.118 (1.932)
Transport: 0.778 (1.189); 1.450 (2.528); 0.866 (1.260)
Urban development: 0.104 (0.168); 0.116 (0.219); -0.407 (-0.739)
Sub-Saharan Africa: -0.381 (-1.253); -0.153 (-0.543); -0.204 (-0.673)
East Asia and Pacific: -0.931 (-2.666); -0.517 (-1.827); -0.761 (-2.392)
Eastern Europe and Central Asia: -0.848 (-2.479); -0.615 (-1.971); -0.673 (-2.040)
Latin America and Caribbean: -0.791 (-2.441); -0.707 (-2.447); -0.899 (-2.919)
Middle East and North Africa: -0.196 (-0.443); -0.163 (-0.476); -0.210 (-0.618)
South Asia: 0.012 (0.031); 0.013 (0.042); -0.327 (-0.912)
World Bank Institute: -0.619 (-0.741); -0.415 (-0.527); -0.053 (-0.052)
Development economics: 1.963 (1.762); 0.708 (0.810); 0.777 (2.250)
N: 513; 502; 374
R2: 0.174; 0.363; 0.399
Adjusted R2: 0.134; 0.330; 0.352
S.E. of regression: 2.123; 1.813; 1.700
Mean dependent var.: 5.747; 5.839; 6.136
S.D. dependent var.: 2.281; 2.215; 2.112
F-statistic: 4.295; 10.871; 8.519
Prob.(F-statistic): 0.000; 0.000; 0.000

Acknowledgements
At the time this paper was written the author was Director of the World Bank's research department. For comments on the paper and other forms of help the author is grateful to Asli Demirguc-Kunt, Jean-Jacques Dethier, Marianne Fay, Sharon Felzer, Aart Kraay, Shiva Makki, Tamar Manuelyan Atinc, Will Martin, Berk Ozler, Mike Toman, Dominique van de Walle, Adam Wagstaff and the journal's anonymous reviewers.
Notes

1 An influential early paper arguing this point was Burnside and Dollar (2000), although Hansen and Tarp (2001) questioned their findings. For critical overviews see Temple (2010) and Clemens et al. (2012).
2 Twenty years ago, the Wapenhans (1992) report laid out these concerns in forthright terms. Since then there has been an effort to do better through tighter quality control on projects at entry and better project implementation practices. But few observers would argue that these concerns are no longer salient.
3 A precise count is not available. My count excludes research assistants.
4 This is based on the IDEAS rankings using a composite index of 20 variables measuring publications, downloads and citations based on the RePEc data; the rankings and details can be found here. If one confines attention to the Bank's research department then the University of Chicago alone does better. Also see the data on citations assembled in Ravallion and Wagstaff (2012).
5 External audiences are also important but are not the focus of this paper. For data and analysis on the impacts of World Bank research on external clients see Ravallion and Wagstaff (2010).
6 Although six responses came from DEC, presumably reflecting changes in VPU assignments.
7 The survey was carried out by Sharon Felzer and Jessica Cameron under the supervision of Jean-Jacques Dethier at the World Bank.
8 These are based on the grade distribution for Operational Units as of September 2011 (67.7%, 27.3% and 5.0% in grades GG, GH and GI+ respectively).
9 The other options allowed were "Not applicable" and "Prefer not to answer." All calculations in this paper filter out these responses as missing values.
10 Following Cohen and Levinthal (1990), the literature on innovation within firms has emphasized the role of internal knowledge and its diffusion; see, for example, Fabrizio (2009), which provides evidence using firm-level data that in-house research capacity reduces search costs for new innovations.
11 The calculation of the share of PhDs for GG+ staff was for 2009. The number of "economists" is measured by the number of staff who have the word "economist" in their job title. This excludes managers.
12 The total of this paid cross-support amounts to over 20 person-years per year of direct support by researchers. However, that is still a very small share of the Bank's staff: the total paid cross-support supplied by the Bank's research department is only around 0.3% of all Bank staff time. Recall that only about 1% of Bank staff are in the department dedicated to research, although research is also done in other units.
13 ARD, EM, ENV, SDV, TRN and URB all map to SDN; EP, POV and PSG map to PRM; EDU, HNP and SP map to HDN. Some sectors were not explicitly identified by the survey, such as Gender. Note that the aforementioned data on PhDs, economists and cross-support are only available at the VPU level, not by sector.
14 Based on aggregates for the last three years for lending on "agriculture, fishing and forestry," "energy and mining," "transportation" and "water, sanitation and flood protection"; the sectoral breakdown of Bank lending can be found in World Bank (2011); the relevant tabulation is here.
15 I did this by allocating each staff member a primary and secondary sector in 2011, based on their research outputs, and giving two-thirds weight to the former. This requires a degree of judgment, but the allocation proved relatively easy.
Note that the high proportion of staff in "economic policy" might be somewhat deceptive, since these are often researchers who are relatively fungible and work across many development topics.
16 The assignment was done by Matthew Mulligan. The bulk of papers have Journal of Economic Literature codes, which helped greatly in the allocation.
17 The specific question was as follows: "When considering all the sources of research globally, how useful is each of the following in terms of its research and knowledge have a positive impact on the quality of your AAA and/or lending operations?"
18 Across sectors there is a very high correlation between familiarity with Bank research and ratings of academia (r=0.81). It is less strong for consultants but still positive (r=0.31).
19 The correlation coefficients between individual ratings of familiarity with WB research and ratings of academia and of external consultants are 0.37 (significant at the 0.1% level) and 0.09 (only significant at the 12% level) respectively.
20 All but one of these sectors is mapped to SDN; the mean future reliance scores are 5.49 (s.e.=0.23) and 6.25 (0.14) for SDN and non-SDN sectors, and the difference is significant (t=2.85). Note that the staff in sectors mapped to SDN need not be in the SDN VPU; many are in regional units. However, there is very little difference in the score between staff in these sectors who are in the SDN VPU versus other VPUs; the mean scores are 5.52 and 5.47 respectively (and the difference is not significant; t=0.09). It is a "sector effect" rather than a VPU effect.
21 N=11; the data were missing for WBI.
22 "Relevance" was qualified by "not too theoretical;" "accessible" was explained as "well written and easy to use" and "timeliness" was qualified as "the research was delivered in the necessary time frame to allow you to integrate into AAA and/or lending operation."
23 Recall that 5+ defines "high" for the "value" variable, while it is 6+ for "familiarity."
24 I tested for nonlinearity in this relationship by adding a squared term in "value" but the coefficient was not significantly different from zero.
25 Recall that the difference in mean familiarity scores is 1.29. Given that the difference in mean value attached to research for their work is 1.43, this implies a difference in mean familiarity with research of 0.60 (the arithmetic is spelled out in the sketch below).
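Purely to make the arithmetic in note 25 explicit, the following sketch combines it with the slope on "value" estimated in Table 18, column (2); the assumption that this particular coefficient is the one behind the 0.60 figure is an inference, not stated in the text. Writing familiarity as F and perceived value as V,

\[
\widehat{F}_i \;=\; 2.107 \;+\; 0.421\,V_i \;+\; \text{controls},
\]

so a gap of 1.43 points in mean perceived value implies a familiarity gap of about

\[
\Delta\widehat{F} \;=\; \hat{\beta}\,\Delta V \;\approx\; 0.421 \times 1.43 \;\approx\; 0.60 ,
\]

which is the figure cited in note 25; that the slope is well below unity is the friction discussed in the conclusion.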