Beyond donor ‘success’ or ‘failure’: a new tool to classify development project outcomes
Rogla, Jennifer (2018). 'Beyond Donor ‘Success’ or ‘Failure’: A New Tool to Classify Development Project Outcomes.' Paper presented at the annual conference of the HDCA, Buenos Aires, Argentina, 2018.
Globally, demands by constituents and international organizations for transparency on development aid outcomes are on the rise, yet the literature clearly demonstrates that we lack micro-level project data, while our macro-level national data are plagued by measurement difficulties (Savedoff et al., 2006; Waddington et al., 2015; Hoey, 2015; Levine and Savedoff, 2015a/b; Bell and Squire, 2016). Although project-level measures offer more accurate outcome data, they still rely on donor interpretations of success or failure in reaching donor-defined goals (Honig, forthcoming 2019). Additional project measurement challenges discussed in the literature include context and time dependency, and changes in the substance and relevance of measures depending on who measures. Finally, both macro- and micro-level measures require taking baseline measurements, implementing the ‘treatment’ – i.e. distributing aid or beginning a project – and then measuring again afterward. However, the capability approach reminds us that development indicators remain subject to local, domestic, and international forces that may have nothing to do with the project, leaving us no way to separate out project effects. For example, in a project helping farmers build coffee businesses, the farmers' revenues are still subject to world coffee prices. We might conclude the project failed to help the farmers, when in fact it sustained their businesses during a downturn in demand. Yet Rocha de Siqueira (2016) argues that ‘good enough’ numbers are now the norm in development policy, and that international organizations find such numbers acceptable, even if severely flawed, because of the constraints of allocating scarce resources.
We undoubtedly must change the way we think about project outcome measurement. I propose a new tool for operationalizing project outcomes that avoids past pitfalls and the trap of ‘good enough’ numbers by focusing on the institutionalization of project-desired changes. There is clear awareness in the development and participatory governance literature that changes must be institutionalized to have a long-term impact. Institutions are sets of rules and/or processes, and institutionalization is the creation of institutions in a target population. It can occur formally, via written codes and laws, or informally, via norms. The words ‘policies’ and ‘norms’ appear frequently in the development literature as tools for promoting the changes most initiatives seek, though their creation or change is not necessarily measured (see Friedmann, 1998; Martinsson, 2011; Fukuda-Parr, 2011; Poku & Whitman, 2011; Fukuda-Parr, 2012a/b; Swiss, 2012; World Bank, 2015; Imbach, 2016). Yet focusing on the institutionalization of project outcomes – the changes occurring around project activities – could avoid some of the above measurement challenges. For example, if we only look at outcomes on ‘counting’ indicators (e.g. X number of people attended a meeting, X people treated for malaria), we have no information on whether those changes will last over time. An institutionalization measure evaluates whether change occurred among the target population, and if so, to what degree and how.
The analysis of institutionalization is not exclusive to international relations. Subjects from pop-culture trends to educational curricula involve the institutionalization of new formal and informal rules. Thus, this tool builds on previous attempts in several disciplines to characterize, analyze, and/or measure the phenomenon of institutionalization. Drawing on the work of the economist North (1990), the political economist Ostrom (1990), the education scholars Miles (1987), Miles & Louis (1987), Curry (1992), and Fullan (2007), and the sociologists Colyvas & Powell (2006), I argue we can categorize institutionalization into four arenas with observable changes: two at the group level (rules and resources) and two at the individual level (behavior and attitudes). Using field research to evaluate the outcomes of three foreign aid projects executed between 1990 and 2011 in Costa Rica, representing a variety of sectors, donors, and initiation approaches, I describe the tool and show the utility of this measurement approach in practice.
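To make the four-arena categorization concrete, it can be sketched as a simple classification structure: each observed change is assigned to one arena, and each arena sits at either the group or the individual level. This is purely an illustrative sketch; the names (`Arena`, `Level`, `ObservedChange`) and the example changes are mine, not part of the paper or its coding scheme.

```python
from dataclasses import dataclass
from enum import Enum


class Level(Enum):
    GROUP = "group"
    INDIVIDUAL = "individual"


class Arena(Enum):
    # Two group-level arenas and two individual-level arenas,
    # following the four-arena scheme described above.
    RULES = ("rules", Level.GROUP)
    RESOURCES = ("resources", Level.GROUP)
    BEHAVIOR = ("behavior", Level.INDIVIDUAL)
    ATTITUDES = ("attitudes", Level.INDIVIDUAL)

    def __init__(self, label: str, level: Level):
        self.label = label
        self.level = level


@dataclass
class ObservedChange:
    """One change observed around project activities (hypothetical)."""
    description: str
    arena: Arena
    formal: bool  # True for written codes/laws, False for informal norms


def group_level(changes):
    """Return the subset of changes occurring at the group level."""
    return [c for c in changes if c.arena.level is Level.GROUP]


# Hypothetical example changes, for illustration only.
changes = [
    ObservedChange("village council adopts a water-use bylaw",
                   Arena.RULES, formal=True),
    ObservedChange("farmers continue attending co-op meetings",
                   Arena.BEHAVIOR, formal=False),
]
print([c.description for c in group_level(changes)])
```

The point of the structure is only that every observed change gets a level as soon as it gets an arena, so group-level and individual-level institutionalization can be tallied separately.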
This tool has enormous potential. Firstly, an institutionalization measure can advance the capability approach. A unique aspect of the capability approach is that it allows us not only to analyze individuals’ functionings – what they do or are – but also to account for the technologies that allow individuals to expand their potential functionings, or capabilities; how those technologies convert into capabilities; and the freedom individuals have to decide which capabilities they value and ultimately use (Alkire, 2005; Robeyns, 2005). A common design problem in development projects is that they often focus on achieving one definition of ‘the good life’ instead of creating space for people to define their own ‘good life’ based on their values (Steen, 2016). Traditional before-and-after project indicators thus focus on measuring changes in technologies or in achieved functionings/behavior towards ‘the good life.’ But an institutionalization measure can additionally capture changes to community rules and resources that occur during a project, which affect how technologies are converted into individual capabilities and/or the freedom to pursue one’s own definition of ‘a good life.’ Furthermore, as conversion factors and freedom of choice depend heavily on one’s macro-environment (Deneulin, 2005; Oosterlaken, 2009), this tool can capture forces external to the project that simultaneously affect project outcomes, helping to separate out the project’s effects.
Secondly, the tool avoids the controversial terms project ‘success’ and ‘failure,’ so that outcomes desired by one actor are not presented as desirable to all. Thirdly, it can be adapted to local contexts. As so many in development have argued, there is no one-size-fits-all solution, and traditional counting indicators often ignore the context-specific goals, barriers, and comparative advantages that a measure focusing on institutional change can capture. Fourthly, it can be used to evaluate domestic development projects. Finally, the approach can be scaled up. For example, we can count the projects aimed at a particular behavior change and evaluate whether that new behavior is observed at the regional or national level. We have very little idea how project results might aggregate up to macro-level outcomes, but this would give us some indication of whether projects have ripple effects and whether changes have diffused to non-project areas. This tool is part of my broader dissertation, which will relate the duration of project outcomes to the type of local actor incentives present.