Evaluating complex interventions in international development
Date: 21 April 2021
Everyone knows that international development is complex and yet the evaluation methods used by researchers tend to ignore this complexity. Edoardo Masset will present the results of a CEDIL review of methods for evaluating complex interventions. The methods reviewed will include: adaptive trials, factorial designs, qualitative comparative analysis, synthetic controls, agent-based modelling, and system dynamics. All these methods have limitations but can be useful in some circumstances when applied to the evaluation of specific interventions.
Estelle Raimondo is the co-editor of a recent book, Dealing with Complexity in Development Evaluation, and an evaluation expert at the World Bank’s Independent Evaluation Group. She will introduce current approaches and experimentation in the evaluation of complex interventions.
Peter Craig is the lead author of the influential MRC guidelines on the evaluation of complex interventions, which are currently being revised, and he will discuss the most recent developments. After their presentations the speakers will take questions from the audience. The event will be chaired by Rick Davies.
- Edoardo Masset, CEDIL Deputy director
- Peter Craig, MRC/CSO Social and Public Health Sciences Unit, University of Glasgow
- Estelle Raimondo, Independent Evaluation Group (IEG) at the World Bank
- Rick Davies, Monitoring and Evaluation Consultant
Edoardo Masset’s presentation: Download PDF here
Estelle Raimondo’s presentation: Download PDF here
Peter Craig’s presentation: Download PDF here
Questions from Webinar:
(Text taken directly from webinar report)
What might be potential reasons for adaptive design?
Many reasons: dropping interventions that do not work, ethical reasons, picking the winner among many candidate interventions, and so on.
Given the underlying premise of complex systems, such as those that characterize international development (e.g. the pursuit of the SDGs), change is emergent and unpredictable; designs should therefore be adaptive rather than straitjacketing reality. For instance, the election of a new political administration, the emergence of a pandemic, or a recession can all be game changers that require stakeholders to revisit and adapt an intervention design.
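To illustrate the "picking the winner" rationale mentioned above, here is a minimal two-stage sketch in Python. The three arms, their success probabilities, and the sample sizes are entirely hypothetical and are not taken from the CEDIL review; the point is only the mechanism of dropping a weak arm at an interim analysis and reallocating the remaining sample.

```python
import random

random.seed(1)

# Hypothetical success probabilities for three candidate interventions.
true_p = {"A": 0.30, "B": 0.45, "C": 0.60}

def run_stage(arms, n_per_arm):
    """Simulate n_per_arm binary outcomes (successes) for each arm."""
    return {a: sum(random.random() < true_p[a] for _ in range(n_per_arm))
            for a in arms}

# Stage 1: equal allocation across all arms.
stage1 = run_stage(list(true_p), n_per_arm=50)

# Interim analysis: drop the worst-performing arm.
dropped = min(stage1, key=stage1.get)
survivors = [a for a in true_p if a != dropped]

# Stage 2: the remaining sample is split between the surviving arms.
stage2 = run_stage(survivors, n_per_arm=75)
totals = {a: stage1[a] + stage2[a] for a in survivors}
winner = max(totals, key=totals.get)
print("dropped:", dropped, "| winner:", winner)
```

A real adaptive trial would pre-specify the interim rules and control error rates; this sketch only shows why such designs use data more efficiently than fixed equal allocation.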
Is the agent-based modeling and system dynamics statistical empirical analysis? Or is it based on qualitative approach?
They are both: statistical/empirical and model-based.
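A toy example may make the "both" answer concrete. The following agent-based sketch (an illustration of the general technique, not a model from the review) simulates adoption of a practice among 100 agents whose adoption probability rises with the share of adopters they observe; the simulated curve is model-based, but comparing it with observed adoption data is where the empirical side comes in. All parameter values are invented.

```python
import random

random.seed(42)

# 100 agents decide each season whether to adopt a new practice.
n_agents, seasons = 100, 20
adopted = [False] * n_agents
adopted[0] = True  # one initial adopter seeds the diffusion

shares = []
for _ in range(seasons):
    share = sum(adopted) / n_agents
    for i in range(n_agents):
        # Adoption probability: small baseline plus social influence.
        if not adopted[i] and random.random() < 0.05 + 0.5 * share:
            adopted[i] = True
    shares.append(sum(adopted) / n_agents)

# 'shares' traces an S-shaped adoption curve that could be compared
# against observed data to calibrate the model empirically.
print(f"final adoption share: {shares[-1]:.2f}")
```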
You made mention of using synthetic control for ‘long-term’ interventions. What does ‘long-term’ mean in years?
SC is a statistical approach: you need, say, at least 20 years of observations, of which at least five are during the intervention, but the more the better.
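As a rough illustration of the data requirement, here is a minimal synthetic control sketch assuming NumPy and SciPy are available. The data are simulated (20 annual observations, 15 pre- and 5 post-intervention, with a true effect of 4), and fitting nonnegative weights by `nnls` and then normalising them to sum to one is a simplification of the constrained optimisation in the standard method, not the method itself.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Simulated data: 20 annual observations (15 pre-, 5 post-intervention)
# for one treated unit and 6 untreated "donor" units.
T_pre, T_post, n_donors = 15, 5, 6
donors = rng.normal(5, 1, (T_pre + T_post, n_donors)).cumsum(axis=0)
treated = donors[:, :3].mean(axis=1) + rng.normal(0, 0.3, T_pre + T_post)
treated[T_pre:] += 4.0  # a constant treatment effect after year 15

# Fit nonnegative weights on the pre-intervention period only, then
# normalise them to sum to one (a shortcut for the usual constrained fit).
w, _ = nnls(donors[:T_pre], treated[:T_pre])
w = w / w.sum()

# The synthetic control is the weighted donor average; the estimated
# effect is the post-intervention gap between treated and synthetic.
synthetic = donors @ w
effect = (treated[T_pre:] - synthetic[T_pre:]).mean()
print(f"estimated effect: {effect:.2f}")  # should be near the true 4.0
```

With only a short pre-intervention window the weights would be poorly identified, which is why the answer above asks for long series.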
Is there opportunity from AI and Machine Learning to address the challenges of making predictions when there is too much complexity?
There surely is, but we did not cover this in our review.
Most modelling is for foresight, not for evaluation purposes. But there are far more relevant modelling approaches to address complexity (in addition to ABM), such as structural equation modelling (SEM) and interactive multiple goal linear programming (IMGLP). How do you judge their relevance?
SEMs include many feedback loops, and IMGLP basically focuses on trade-offs between multiple objectives.
The presentations by Edoardo and Peter distinguished sharply between design and implementation. But complexity leads to uncertainty, and to the importance of learning through evaluation during implementation, redesigning or adjusting the design along the way. What is needed, therefore, is dynamic evaluation, which can also generate evidence that can be used as input for independent evaluations such as those that Estelle presented.
It is of key importance to improve the reliability and credibility of impact evaluations.