Howard White | 13 March 2020
Twenty years ago, I was part of a team conducting a thematic evaluation of the poverty impact of British aid. The terms of reference did specify that we should consider using micro data to examine poverty impact at the project level, but the team rejected that approach as infeasible. Ten years later, the evaluation department of DANIDA convened a workshop on the impact evaluation of infrastructure (the papers from which are published in a special issue of JDEff). The head of evaluation from a major development agency drew a basic causal chain on the board, drew a line between outputs and impact, and declared, ‘it is not possible to cross that line’.
Of course, by 2010 people were crossing that line in increasing numbers, and we have come a long way since then. The 3ie Development Evidence Portal now contains nearly 4,000 impact evaluations of development interventions.
But whilst there is more and more evidence of what works, important gaps remain. There are geographic gaps: nearly 200 impact evaluations have been carried out in Uganda, but only 15 in the far more populous Egypt, and only one in Somalia. There are also sector gaps: over 1,600 of the studies are on health, compared to just 51 on energy and extractives, and 27 on transport. And, most importantly, there are methods gaps. Current impact evaluation designs are not suited to assessing the causal effects of packages of programmes – say, the effect of UK aid to Afghanistan over the last 10 years on governance outcomes – to unpacking synergies between programme components, or to addressing small-n interventions such as technical support to a single agency, support to a multi-donor sector programme, or policy dialogue sustained over a number of years.
The Centre of Excellence for Development Impact and Learning (CEDIL) was set up to fill these gaps. Research, captured in a series of pre-inception and inception papers, together with consultation workshops with a broad range of experts, led to the identification of research focus areas, which we call programmes of work. We issued a request for proposals for research under these programme areas and received a great response. I am now very pleased to announce awards to 25 projects – evaluations, secondary data analyses, evidence syntheses and exploratory studies – covering a broad range of sectors and countries. I am also pleased to say that the projects are spread across the following three programmes of work, which we identified as focus areas:
- Evaluating complex interventions: CEDIL is supporting research that will develop new approaches to assessing the effectiveness of complex interventions, particularly in challenging contexts such as fragile and conflict-affected states. The studies use innovative designs and combine multiple methods – such as traditional surveys and interviews alongside biomarkers, satellite imagery, and data on weather, agricultural productivity, conflict and so on – to assess the causal effects of complex interventions and generate new insights on the pathways to impact.
- Enhancing evidence transferability: The transferability and applicability of research findings from one setting to another (i.e. external validity) has become a major concern in the field of impact evaluation. CEDIL encourages researchers to use middle-range theory, which sits between grand theory and project-level theory, to identify sufficiently general causal mechanisms and their supporting assumptions, and so to assess the transferability of programme designs. CEDIL-supported studies are combining multiple methods, including machine learning, to unpack how interventions work and analyse why effectiveness varies between contexts.
- Increasing evidence use: CEDIL is ultimately interested in evidence being used to improve lives. The projects in this area will review different approaches to promoting evidence-informed decision-making, produce guidance and identify effective methods to use an evidence base to make policy decisions in different contexts.
Over the next few years, these projects will produce new evidence that will fill key gaps in priority sectors. Policymakers, programme implementers, donors and evaluators will have a wider evidence base and new tools to make decisions about development impact. These studies will break new ground in terms of innovative designs for evaluating complexity, improving the transferability of research findings and promoting evidence-informed decision-making.
Howard White is CEDIL’s research director and CEO of the Campbell Collaboration.
For updates on CEDIL projects, sign up to join our mailing list.