Gaps in Evaluation Methods for Addressing Challenging Contexts in Development

Authors: Davey, C., Hassan, S., Bonell, C., Cartwright, N., Humphreys, M., Prost, A., Hargreaves, J.

Abstract: We begin this paper by emphasizing that we currently do not learn as much as we could from evaluations. While there are well-established methods for determining and understanding the effects of simpler interventions in a given set of places (i.e. internal validity), it is less clear how to learn the most possible from evaluations of complex, context-specific interventions, and how to apply what we learn in other contexts. This matters especially in international development, where evaluations are constrained by time, cost and opportunity, and where there is substantial heterogeneity in the issues addressed and the contexts in which work is undertaken.

Using examples and case studies throughout, we outline several gaps in evaluation methods that, if addressed, could allow us to learn more. First, we argue that an important gap is the failure to combine the analysis and interpretation of process and outcome data, and we illustrate the benefits of doing so. We then highlight principles from two methodological frameworks in other research fields that could be adapted to guide this integration, and discuss Bayesian modelling as a potential method for doing so. Second, we situate this gap within a broader evaluation approach that relies on developing “mid-level” theories and using data from evaluations to test and refine them, so that knowledge from one setting can be transported to others. Finally, we identify further gaps and the challenges that confront this evaluation approach.

https://doi.org/10.51744/CPIP4
