List of authors: Davey, C., Hassan, S., Bonell, C., Cartwright, N., Humphreys, M., Prost, A., Hargreaves, J.
Abstract: We currently do not learn as much as we could from evaluations. While there are well-established methods for determining and understanding the effects of simple interventions in one set of places, it is less clear how to maximise learning from evaluations of context-specific, complex interventions, and how to apply what we learn in other contexts. This is especially important in international development: evaluations are limited by time, cost and opportunity, and there is substantial heterogeneity in the issues addressed and the contexts within which work is undertaken. We consulted with evaluation experts in the Centre for Evaluation at LSHTM and an interdisciplinary group from the Intellectual Leadership Team at CEDIL.
The consultations identified gaps between established methods and their use, as well as gaps in available methods. Gaps in use included approaches to interpreting sub-group analyses, identifying unintended effects, and using theories of change. Gaps in methods included how to choose among non-randomised designs and how to conduct mediation analysis. However, the pre-eminent gap identified was how best to use findings from evaluations to inform policy decisions. We argue that the effects of complex interventions are often highly context-dependent, and that the theories describing interventions in context are what could be transported from one place to another to maximise learning from evaluations. Methods are lacking for how to develop and test such theories, and for how to know whether a theory developed and strengthened in one place and time is appropriate to inform action elsewhere.