CEDIL Inception Paper 1
The final paper will be available on the website in the near future
List of authors: Webster, J, Lewis, J, Exley, J, Copestake, J, Masset, E, Hargreaves, J, Davies, R
The success of all interventions and programmes depends on how well they are implemented. Data from evaluations, including process evaluations, are generally available only retrospectively, providing learning ‘for next time’. This is important, but it misses opportunities to improve interventions and programmes while they are being delivered. Programmes are increasingly designed to include formative evaluation to inform intervention design, and/or concurrent evaluation of early outcomes and impact monitoring. Evaluation teams are recruited at programme inception, and new technologies may be used to deliver data in real time to inform programme implementation. In some cases data may be scarce, and a key question is which data are most important to acquire in a timely fashion. The use of timely data for effective programme improvement requires rapid data collection and strong adaptive learning mechanisms. The data have to be of high quality and reach the right people: those who understand the data as presented and have the decision-making power to act on them. Guidance on programmatic change must then be produced and transferred back to implementers who are able and willing to put the changes into action. The adaptive learning mechanism is itself a complex intervention that needs to operate effectively in both experimental and routine settings. Questions therefore need answering on when and how to be rapid in collecting data and adaptive in using them, and on how to evaluate whether the use of timely data succeeds in improving programme impact, so that the value of these approaches can be assessed.
In this paper, we aim to recommend methods for using timely data and adaptive learning to improve programmes, and for evaluating their impact. The Centre for Evaluation will hold two consultations to support this. The first event will be an internal scoping exercise among LSHTM staff members; the second will be a symposium including LSHTM staff, CEDIL ILT members, Centre for Evaluation external partners, and funders. The objectives of these events are to explore experiences within the international development field, assess perceptions of methodological gaps, and gather suggestions for closing those gaps. We will conduct a scoping review to identify programmes where adaptive learning from timely data occurs, and to assess whether and how its impact on programme improvement is evaluated, critiquing the methods used and illustrating best practices through retrospective case studies. We will seek to define impact evaluation and process evaluation (based on the MRC guidelines for process evaluation) for adaptive learning, and to make recommendations for applying principles derived from this work to potential upcoming DFID evaluation scenarios. These might include evaluations in FCAS / emergency settings, evaluations of intervention bundles, evaluations of aid delivery mechanisms, or other relevant settings that emerge during the consultations.
This paper offers the potential development of a structured method for evaluating the use and success of adaptive learning in improving projects, and thereby for increasing the value for money of those projects. The method could also inform the handling of fluid relationships with multiple implementing partners, including in humanitarian emergency contexts.