CEDIL - Centre for Evaluation Lecture Series


The Centre of Excellence for Development Impact and Learning (CEDIL) and the Centre for Evaluation host a lecture series addressing methods and innovation in primary studies.

Download the CEDIL CfE-Lecture-Series Calendar. Most lectures will be live-streamed, recorded, and posted on this site later.

Previous lectures are listed below.


Parenting for Lifelong Health: Programme Optimisation and Scale-Up

Date :  24 June 2020

Speaker : Dr Jamie M Lachman & Dr Yulia Shenderovich

About the lecture

Parenting for Lifelong Health (PLH) is a suite of parenting programmes designed to offer parenting support and reduce violence against children in low- and middle-income countries. The PLH programmes have been evaluated in a number of randomised trials in South Africa, Thailand, the Philippines, El Salvador, and Lesotho, with positive effects on a number of child, caregiver, and family outcomes. The presentation will focus on two current research projects aimed at increasing our knowledge of the implementation and scale-up of parenting programmes and other family-based interventions in low- and middle-income countries. First, we will present how the RISE study (www.rise-plh-eu) is applying the Multiphase Optimisation Strategy (MOST) framework to optimise PLH for Young Children for scalability by identifying the most effective and cost-effective components related to programme implementation in North Macedonia, Moldova, and Romania. Second, we will discuss how the Scale-Up of Parenting Evaluation Research (SUPER) study is examining the implementation and scale-up of PLH programmes in over 20 countries around the world for more than 400,000 beneficiaries. We will describe the research questions and methods we plan to use to explore programme use in routine service delivery. We will also describe how the COVID-19 pandemic has impacted these studies and the delivery of PLH programmes, and how we have adapted the content for global dissemination in collaboration with UNICEF, WHO, CDC, USAID, and other partners, reaching 32 million families in over 173 countries.

The lecture can be viewed by following this link: Click here to view the recording




About The Presenters

Jamie is a research associate at the University of Oxford Department of Social Policy and Intervention and a research fellow at the University of Glasgow MRC/CSO Social and Public Health Sciences Unit. He is also the founder of Clowns Without Borders South Africa and a co-founder of the Parenting for Lifelong Health initiative. He has over 15 years of experience developing, testing, and scaling up family and parenting programmes to reduce violence against children and improve child wellbeing in over 25 low- and middle-income countries for more than 500,000 beneficiaries. He is also a storyteller, banjo-player, songwriter, facilitator, and clown.

Yulia is a post-doctoral researcher at the Department of Social Policy and Intervention at the University of Oxford, working on issues of violence affecting young people in low- and middle-income countries (LMICs), with a focus on implementation and scale-up of violence prevention programmes. Yulia has worked on programme evaluations of parenting and school education programmes in South Africa and the UK.


Impact of Impact Evaluations

Date :  17 June 2020

Speaker : Richard Manning & Ian Goldman



About the lecture

In 2006 the Center for Global Development’s report ‘When Will We Ever Learn? Improving lives through impact evaluation’ bemoaned the lack of rigorous impact evaluations. The number of impact evaluations has since risen to over 500 per year, as has the number of systematic reviews and other synthesis products.

We researched international organizations and countries, including Mexico, Colombia, South Africa, Uganda, and the Philippines, to understand how such products are being implemented and used, and what facilitates or inhibits their use.

While we see definite progress, we find that:

  • Impact evaluations are too often donor-driven, and not embedded in partner governments.
  • The willingness of policymakers to take evidence seriously is variable.
  • The use of evidence is not tracked well enough.
  • Impact evaluations should be seen within a broader spectrum of tools that support policymakers.
  • Those who commission them need to learn from good practice in maximising the prospects of use.

Slides of the full presentation (including additional slides not included in the original lecture) can be found here: Click here to download the presentation

Link to recording: Click here to go to recording

(Please note, unfortunately the beginning of the lecture is missing)



About The Presenters

Richard served in DFID and its predecessors from 1965 to 2003. He was Chair of the OECD’s Development Assistance Committee from 2003 to 2008 and has since worked as a consultant. He has also been Board Chair of both the Institute of Development Studies and 3ie.


Ian is an Advisor on Evaluation and Evidence Systems, CLEAR Anglophone Africa, and former Deputy Director General in the Department of Planning, Monitoring and Evaluation, South Africa.

Using big data for evaluating development outcomes: lessons for evaluation during COVID

Date :  10 June 2020

Speaker :  Francis Rathinam and Xavier Vollenweider

Discussant: Federica Di Battista

About the lecture

Significant data gaps remain in monitoring and evaluating development outcomes. Big data—that is, data which are digitally generated, passively produced and automatically collected—offer great potential for meeting some of these data needs. The use of big data in evaluation has become ever more relevant since the outbreak of the COVID-19 pandemic, which has severely limited researchers’ opportunities to collect data in the field with traditional methods. In the talk, the authors will present a systematic map highlighting how big data are being innovatively used in measuring and evaluating development outcomes. The authors will also discuss the risks, biases and ethical challenges in using big data. The presentation will offer an opportunity to discuss what tools and technologies are available to conduct evaluation in the time of COVID.

Watch the recording here: Link to Recording

Francis Rathinam’s slides: Link to Slides

Federica Di Battista’s slides: Link to Slides

Xavier Vollenweider’s slides: Link to Slides

Please note Xavier’s slides do not include some material shared in the lecture.



About The Presenters


Francis manages 3ie impact evaluations across a wide spectrum of development issues, including transparency and accountability, governance and social protection. He has also been working closely with some of 3ie’s member country governments to build capacity to use and institutionalise impact evaluations. Francis is currently leading on 3ie’s systematic map of studies that used big data to innovatively evaluate development outcomes. Francis holds a PhD from the University of Hyderabad, India.

Xavier is an economist with Flowminder and has a background in environmental and development economics. His interest lies in using novel data sources and techniques to characterise poverty dynamics, climate vulnerability and the adoption of mobile financial services. He received his PhD from the London School of Economics.

Federica is an Evaluation Advisor at DFID and the Evaluation Unit’s Trialing Lead, where she is responsible for the development and management of new and existing programmes that focus on experimental methodologies to conduct impact evaluation. She worked for several years in Ghana managing a portfolio of impact evaluation research studies, mainly focused on agriculture and rural development, and holds a PhD from Tor Vergata.




Using social science theories to design and evaluate development programs

Date :  20 May 2020

Location : Virtual

Speaker :  Annette Brown


About the lecture

Too often those who design programs or evaluations use logic models or results frameworks that rely on causal relationships or mechanisms of change that are assumed. That is, they assume that good activities will lead to good outcomes without considering the social science theories that may (or may not) predict those relationships. Understanding the relevant social science theories is not just crucial for making the right prediction about how program activities will produce outcomes, it is also necessary for identifying what situational assumptions are needed for the prediction to hold. This lecture will give examples of theories in psychology, economics, and political science used to design interventions and explore how these theories have been tested in the field and what we have learned about whether and how they work. It will also include some recommendations for those who want to use theory in their work.

The lecture was recorded, and can be viewed at this link: Link to recording

To view the slide show please download a copy of the slides here: Slideshow of lecture




About The Presenter

As the Principal Economist for FHI 360, Annette Brown leads efforts to build an organizational culture of evidence generation and use across all FHI 360 sectors and regions. She also serves as editor-in-chief for the R&E Search for Evidence blog. Prior to joining FHI 360, Brown headed the Washington, DC office of the International Initiative for Impact Evaluation. Earlier in her career, Brown worked at both for-profit and not-for-profit development implementers and was an Assistant Professor at Western Michigan University. She earned her Ph.D. in economics from the University of Michigan where she was a National Science Foundation Fellow.

Ten Steps towards the Construction of a Middle-level Theory

Date :  13 May 2020

Location : Virtual via the Collaborate platform

Speaker :  Professor Nancy Cartwright

About the lecture

Middle-level theory has several uses. It can help predict if a programme might be expected to work in a particular setting. It offers insights into what programme design features are needed to help ensure success. It provides invaluable information for monitoring the programme to see if it is on track as time progresses and for fixing some of the problems that arise. It also reveals the causal processes and related assumptions to be tested in an evaluation. This in turn can help in identifying evaluation questions. Finally, the theory can help in interpreting evaluation findings and assessing their relevance and locating a description of them that can be helpful for programme design and evaluation in other settings. This talk will illustrate the construction of a middle-level theory of change to serve these purposes, in 10 steps.

Watch the recording here: Link to recording

Slides for the lecture: PDF of slides

About The Presenter

Nancy Cartwright FAcSS is Professor of Philosophy at Durham University and a Distinguished Professor at the University of California, San Diego (UCSD). In the first half of her career at Stanford University she specialised in the philosophy of the natural sciences, especially physics; in the second half, at the London School of Economics and now Durham and UCSD, she has specialised in philosophy and methodology of the social sciences with special attention to economics. Her current research focusses on objectivity and evidence, especially for evidence-based policy.

Meta-ethnography to build middle-range theories: an exploration in three case studies

Date :  6 May 2020

Location : Virtual

Speaker :  Audrey Prost

About the lecture

This lecture will explore the use of meta-ethnography to build middle-range theories that support the development and evaluation of interventions.

It will draw on three examples from public health, education, and sustainability research.

Slides for the lecture can be found here: Presentation slides

Watch the recording here: Link to Recording

About The Presenter

Audrey is Professor of Global Health at University College London’s Institute for Global Health. Her work focuses on the design and evaluation of participatory interventions to improve women’s, children’s and adolescent health, particularly in India.

Using Evidence in Policy and Practice – lessons from Africa

Date :  22 April 2020

Speaker :  Ian Goldman, Mine Pabari and Laurenz Langer

About the lecture

This lecture draws on eight case studies of evidence use in Africa to identify lessons for promoting evidence use by government. The research is being published in a book available in June.

The presentation can be found here: Presentation Slides

The recording of the lecture can be found here: Recorded Lecture

About The Presenters

Ian is an Advisor on Evaluation and Evidence Systems, CLEAR Anglophone Africa, and former Deputy Director General in the Department of Planning, Monitoring and Evaluation, South Africa.


 Mine is currently a visiting research fellow with CLEAR Anglophone Africa and the managing partner of Athari Advisory.


Laurenz is a Senior Researcher at the University of Johannesburg’s Africa Centre for Evidence (ACE) specialising in evidence synthesis and its use to inform decision-making.


Trials and tribulations of collecting evidence on effectiveness in disability inclusive development

Date :  11 March 2020

Location : London

Speaker :  Professor Hannah Kuper

About the lecture

There are at least one billion people with disabilities globally, and they are falling behind in all measures of development, whether education, employment or poverty. This gap is an important issue not just in terms of development, but because it represents a violation of human rights, and has negative impacts on the lives of people with disabilities and their families.

Action is therefore urgently needed to close these gaps between people with and without disabilities. But there are important hurdles to overcome in this mission. The current evidence base is (very) poor on what works and what does not in terms of disability-inclusive development. More evidence is therefore needed, but there are methodological issues in collecting this data, including how to measure disability, which outcomes are most important, and how to meaningfully include people with disabilities in this research. Another concern is that the mechanisms by which we can use evidence on disability-inclusion to inform policy and practice are weak, and little funding is available to support disability inclusion.

This talk will discuss these issues, providing practical examples. It will introduce PENDA – a new DFID-funded initiative to collect evidence on disability-inclusive development – and describe how PENDA is trying to overcome the hurdles to improving evidence quality and availability.

The lecture will be livestreamed; please click on the link to go to the recording: Livestream Link






About Hannah Kuper

Hannah is Director of the International Centre for Evidence in Disability, a research group at the London School of Hygiene & Tropical Medicine (LSHTM). Her main research interest is in disability in low resource settings, in particular: 1) Measuring disability and impairments; 2) Exploring the impact of disability on health and well-being; 3) Investigating the health and rehabilitation needs of people with disabilities, and how these can be met. She has an undergraduate degree from Oxford University in Human Sciences and a doctorate from Harvard University in epidemiology. She has worked at LSHTM since 2002.

Measuring the 'hard to measure' in development: abstract, multi-dimensional concepts and processes

Date :  04 March 2020

Location : London

Speaker :  Dr Anne Buffardi

About the lecture

Development is a multi-dimensional, imprecise concept. Initiatives that aim to improve development attempt to address entrenched economic and social issues, increasingly through multi-component programmes, involve diverse sets of stakeholders pursuing different, sometimes competing interests, and must adapt to shifting contexts. They operate under conditions of uncertainty and complexity. Each of these factors poses challenges for measurement validity and reliability.  Based on common challenges that arose through development initiatives, we identified four hard-to-measure dimensions of development: abstract, multi-dimensional concepts, processes, and issues; challenging settings where there are unpredictable, sudden, or frequent shifts in the environment; multiple, uncertain pathways of change; and multi-layer implementing structures.  This lecture focuses on the first dimension, discussing construct validity and three examples of multi-faceted concepts: evidence-informed decision-making, youth transitions to adulthood and human rights-based approaches to development.




About Anne Buffardi

Anne is a Senior Research Fellow at the Overseas Development Institute (ODI).  Her research examines how different stakeholders engage in different phases of the policy process, including their use of different types of evidence to inform decision-making and development policy and practice.  Anne has worked on multi-component, multi-site initiatives in Africa, Latin America, Southeast Asia and emerging economies, predominantly on global health and governance issues.

Five Challenges In The Design And Practice Of Implementation Science Trials For HIV Prevention And Treatment

Date :  19 February 2020

Location :  London

Speaker :  James Hargreaves



About the lecture

Identifying programme implementation strategies that most effectively strengthen the HIV treatment and prevention cascades in Africa is a pressing global priority. Rigorous trials that compare outcomes under different strategies have a role to play. However, there are challenges associated with making such trials: Feasible to undertake, Useful for onward policy making, Rigorous and unbiased, Relevant to “real-life” and Informative. In other words, they need to be FURRI, but making them so is not simple. I will discuss these challenges (and some possible solutions), with examples from the field of HIV prevention and treatment.

The lecture will be live streamed and can be found at this link from the start of the lecture: Live Recording

About James Hargreaves

James is Professor of Epidemiology and Evaluation at the London School of Hygiene and Tropical Medicine



Using Evidence in Humanitarian Decision-Making

Date :  Wednesday 29 January 2020

Location : London

Speaker :  Sheree Bennett


About the lecture

The IRC has committed to making 100 percent of its programming evidence-based or evidence-generating by 2020. To achieve this, the IRC undertook an extensive program of evidence mapping to develop the Outcomes to Evidence Framework (OEF), a tool that clearly defines the outcomes the IRC aims to achieve and the corresponding pathways or theories of change, and synthesizes the evidence for what works to achieve these targeted outcomes. This talk will focus on the OEF and other efforts to increase the use of evidence, the challenges and successes experienced to date, and lessons for other agencies wishing to adopt a more evidence-based approach.

A live recording can be viewed here: Live Lecture

A copy of the slides can be found here: Presentation Slides


About Sheree Bennett

Sheree Bennett is a Senior Research and Evidence Advisor at the International Rescue Committee (IRC), where she leads the organization’s global strategy to increase the use of evidence in strategic and programmatic decision-making. She supports technical and country program teams across the organization to identify, critically appraise and apply evidence in actionable and contextually appropriate ways. This includes making technical shifts or adapting country-level program portfolios based on the best available evidence, cost data and contextual considerations. She also supports the development of IRC’s growing research agenda in Europe, where she develops academic partnerships and manages research and evaluation of IRC programs in governance, peace-building, economic development, and social and political integration. She previously held positions as a Research Advisor for Governance programs and then as the Evidence to Action Advisor at the IRC. She has over 12 years of research and evaluation experience with a focus on the politics of service delivery, peace-building and participatory approaches to local development.


How the Global Innovation Fund uses impact forecasts to guide investment decisions

Date :  Wednesday 4 December 2019

Location : London

Speaker :  Ken Chomitz


About the lecture

The Global Innovation Fund invests in early-stage innovations, public and private, that have the potential for large social impact at scale. GIF seeks to maximize that impact. It does so with an impact forecasting methodology tailored to its evidence-based, venture-capital-like approach. The “Practical Impact” methodology is distinctive in applying a universal impact metric to all outcomes, in projecting long-term impact, and in adjusting impact for risk.

The talk will explore the design philosophy behind Practical Impact, the way it incorporates evidence, how it is implemented, and plans for future elaboration.

The lecture was recorded; however, technical difficulties meant we lost the beginning of the lecture. The rest can be found at this link: Lecture Recording


About Ken Chomitz

Ken Chomitz joined the Global Innovation Fund as Chief Analytics Officer in 2016, after a distinguished career in research and evaluation at the World Bank. As Senior Advisor in the World Bank Group’s Independent Evaluation Group, he led major evaluations of the Bank’s efforts in energy policy and climate change. He is a co-author of the 2016 World Development Report, Digital Dividends, where he wrote on the implications of the data and technology revolutions for development practice.

Chomitz holds an SB in mathematics from MIT and a PhD in Economics from the University of California, Irvine.  Prior to joining the World Bank, he was a National Research Council Fellow; Assistant Professor of Economics at Boston University; and Senior Advisor with the Development Studies Project, a Jakarta-based policy advisory group.


Turning 'evidence for development' on its head

Date :  Wednesday 30th October 2019

Location : London

Speaker :  Ruth Stewart


About the lecture

In an era of decoloniality, post-‘development’, and antipatriarchy, the evidence-based movement in the North is failing to move with the times and as a result is outdated and risks being ineffective. Living and working in the global South, I experience a world in which innovation in evidence-informed decision-making and its related methodologies are necessary, routine and inspirational, and yet they are largely ignored by the global North. Whilst resource-poor, and not well publicised, the evidence community across Africa is world-leading in a number of respects. This lecture is a call to arms for all those who want to ensure that better evidence leads to better decisions and to better futures for those living in resource-poor environments. It proposes a new lens through which to view ‘evidence for development’. It celebrates the successes of Southern evidence communities, achieved largely in spite of, and not because of, Northern good intentions.

The recording can be watched by going to this link

A copy of Ruth’s slides can be found here: Ruth’s Presentation Slides


About Ruth Stewart

Prof Ruth Stewart is Director of the University of Johannesburg’s Africa Centre for Evidence and chairperson of the Africa Evidence Network. Having grown up in Malawi, she has been working in South Africa since 1998. She has a background in social sciences, and has worked across academia and government. Her work includes the production of evidence for decision-makers, as well as supporting civil servants to access and make sense of research.




Making data reusable: lessons from replications of impact evaluations

Date :  Wednesday 9th October 2019

Location :  London

Speaker :  Marie Gaarder and Sayak Khatua


About the lecture

In recent years, efforts to replicate the findings of scientific studies indicate that many results cannot be verified. In other words, reported findings cannot be reproduced using the original dataset and analysis code. The ‘replication crisis’ (as it has come to be known) appears to be a cross-disciplinary challenge. While this has led to a call for more replications, in practice there are few incentives for doing so. In the international development sector, there is an emphasis on developing interventions and policies that are grounded in rigorous evidence. Given the limited resources available to tackle large-scale challenges, it is imperative to ensure policymaking and programming draw upon lessons learned from evaluations of development interventions. But, given the replication crisis, how reliable is this evidence?

In its role as a producer and synthesizer of evidence, the International Initiative for Impact Evaluation (3ie) funds impact evaluations of development interventions and policies in low- and middle-income countries. In 2018, we embarked on a project to replicate published evaluation results using the data and analysis code submitted by evaluation teams. The talk will present the findings from this effort and discuss lessons learned and possible recommendations for various actors, hopefully with active participation from the audience.

This lecture will be livestreamed at this address; please click here to listen

Slides for the lecture can be downloaded by clicking here



About Marie Gaarder

Marie Gaarder provides general leadership, strategic direction and guidance to 3ie’s work in evaluation, synthesis, innovation and country engagement, in addition to overseeing the evaluation and synthesis office. Marie has over 19 years of experience managing operational and research projects with a development focus.

Marie is the co-chair of the International Development Coordinating Group within the Campbell Collaboration, a member of the Research Ethics Review Committee of the Partnership for Economic Policy, and a member of the DFID-CDC Evaluation & Learning Programme Steering Group.

Marie holds a PhD in Economics from University College London, an MSc in Economics from the London School of Economics and a graduate degree in Political Science, Arabic and Economics from the University of Oslo, Norway.

Sayak is a Research Associate at 3ie. Sayak supports various research transparency initiatives undertaken by the evaluation office. He contributes to in-house research projects, including push-button replications. He is also responsible for managing data transparency activities for various grant programs. Sayak holds a Masters in Economics from Portland State University.


Evidence for Action in New Settings: The Importance of Middle-Level Theory

Date :  Wednesday 5th June 2019

Location :  London

Speaker :  Nancy Cartwright

About the lecture

For predicting intervention outcomes in a new setting you need a context-local causal model of what is expected to happen there. A theory of change (ToC) for the intervention is a starting point. ToCs are ‘middle-level theories’: they aim for some, but not universal, general applicability. These are typically ‘arrows-and-variables’ models depicting what steps should occur in sequence but not the interactive factors necessary at each step, nor possible interrupters/defeaters. For policy prediction these theories need context-local thickening. This requires an understanding of how each step produces the next, which in turn calls for middle-level theory of a different kind: the middle-level principles (mechanisms) that govern that production. The context-local model allows better prediction of whether an intervention can work there, what it would take for it to do so, what its side effects might be and whether all this is affordable and acceptable in the context.

The lecture can be viewed at this link and a copy of the slides can be downloaded by clicking on this link



About Nancy Cartwright

Nancy Cartwright FAcSS is Professor of Philosophy at Durham University and a Distinguished Professor at the University of California, San Diego (UCSD). In the first half of her career at Stanford University she specialised in the philosophy of the natural sciences, especially physics; in the second half, at the London School of Economics and now Durham and UCSD, she has specialised in philosophy and methodology of the social sciences with special attention to economics. Her current research focusses on objectivity and evidence, especially for evidence-based policy.


Designing evaluations to inform action in new settings

Date :  Wednesday 22nd May 2019

Location :  London

Speaker :  Calum Davey




About the lecture

This presentation will be based on a CEDIL inception report. The report drew on the perspectives of more than five academic disciplines — from epidemiology to philosophy — and reviewed a diverse range of literature on the task of ‘learning for elsewhere’, addressing the questions: what is learned in evaluations of complex interventions that is useful for future decision-making, and how can this be improved? Suggested answers all involved theory, begging questions about which settings the theories apply to, and how to know this quickly. The notion of context-centred interventions challenged the sentiment that learning ‘what works?’ or even ‘how does it work?’ helps, when in fact approaches to knowing ‘why is the outcome occurring?’ would be more useful.

A copy of the slides from this presentation can be found by clicking on this link


About Calum Davey

Calum is an Assistant Professor at the London School of Hygiene and Tropical Medicine. He has worked on evaluations in the health and education sectors. He has been involved with CEDIL from the start, and was the lead author on one of the pre-inception reports and one inception report. He currently splits his time between research on evaluation methods and the PENDA programme, a £7m DFID-funded research programme on disability-inclusive development. 

Learning and Adapting in Development Practice

Date :  Wednesday 15th May 2019

Location :  London

Speaker :  Patrick Ward





About the lecture

The growth of the global evidence base has provided opportunities to accelerate development through the systematic sharing of evidence of ‘what works’. Context matters, however, and the implementation of development programmes requires the ability to learn from and respond to successes and failures on a short time scale and in the face of limited data. This lecture will explore factors influencing the extent to which development programmes are able to adapt in the light of evidence and learning. It will draw from the practical experience of monitoring and evaluating development programmes and supporting government statistics across a range of sectors in developing countries.

To watch the livestream, please click on this link. To see the slides, please click on this link.

About Patrick Ward

Patrick is CEDIL Programme Director and has overall responsibility for the delivery of the programme, working closely with the Research Director as a member of the CEDIL Directorate. Patrick Ward is Director of Oxford Policy Management’s Statistics, Evidence and Accountability programme. He has more than 20 years’ experience in leading work in the monitoring and evaluation of development for both donor-financed programmes and government systems, particularly in the social sectors.


The Need for Using Theory to Consider the Transferability of Interventions

Date :  Wednesday 8th May 2019

Location :  London

Speaker :  Professor Chris Bonell


About the lecture

This lecture explores ways that the transferability of interventions to new settings might be modelled statistically, and the role of theory in considering the question of transfer. The lecture draws on preliminary results of the realist trial of the Learning Together whole-school health programme in the UK.

Unfortunately, a technical error meant the first part of the lecture was not recorded, but the remainder can be found by clicking this link. Slides from the lecture can be downloaded by clicking on this link.





About Chris Bonell

Chris is Head of the Department of Public Health, Environments and Society, and Professor of Public Health Sociology at the London School of Hygiene and Tropical Medicine.  Prior to joining LSHTM, Chris was Professor of Sociology and Social Policy at University College London, and Professor of Sociology & Social Intervention at the University of Oxford.  His main areas of research are adolescent health, sexual health, substance use, social exclusion and health, and research methodology.

Using RCTs to Evaluate Social Interventions: Have We Got It Right?

Date :  Wednesday 27th March 2019

Location : London

Speaker :  Professor Charlotte Watts








About the lecture

Randomised controlled trials (RCTs) provide the gold-standard method for obtaining evidence of intervention impact. Historically, the approach was developed to assess the impact of clinical interventions. Increasingly, however, RCTs – both individual and cluster – are being used to assess a broad range of behavioural and social interventions. Although some argue that using randomised designs is not appropriate for evaluating social and community-based interventions, we disagree. Whilst there may be challenges (as there often are with clinical interventions), randomisation and the use of control populations should always be considered, as this gives the most robust measure of effect size. But this doesn’t mean that we have everything right. Drawing upon examples from intervention research on HIV, as part of the STRIVE research programme, and on violence against women, as part of the LSHTM Gender, Violence and Health Centre, the presentation will discuss whether it is appropriate to apply all of the standards and ‘rules’ without consideration of the potential implications for the feasibility, forms and applicability of the evidence generated. To watch the livestream, please click on this link.





About Charlotte Watts

Charlotte is Chief Scientific Adviser to the UK Department for International Development (DFID).  In this role she is Director of the Research and Evidence Division and Head of the Science and Engineering Profession for DFID. She is currently seconded to DFID from the London School of Hygiene and Tropical Medicine, where she is Professor of Social and Mathematical Epidemiology and founded the Social and Mathematical Epidemiology Group.






When context is the barrier: Evaluating programmes during political turmoil


Date :  Wednesday 6th March 2019

Location :  London

Speaker :  Dr Joanna Busza




About the lecture


The importance of “context” in evaluation is increasingly recognised, especially when considering how complex interventions might be scaled up, adapted or replicated for new settings. Most process evaluation frameworks include documenting contextual characteristics deemed relevant to the intervention’s implementation, such as the structure and function of health systems, cultural norms and practices, and existing laws or policies. The MRC guidelines for process evaluations, for example, suggest using existing theory to identify a priori the factors likely to facilitate or hinder successful implementation of intervention components. This seminar will focus on challenges to the design, implementation and evaluation of community-based health programmes when context – at its broadest level – changes in abrupt, unpredictable ways. I will share examples from research in Cambodia, Ethiopia and Zimbabwe, including both negative and positive consequences of dramatic political or policy changes, and discuss implications for completing and interpreting the affected studies.  The livestream can be accessed by following this link: Livestream


About Joanna Busza

Joanna is a social demographer with over 25 years’ experience in community-based public health research, with a focus on sexual, reproductive, and maternal health. Her expertise lies in qualitative and mixed methods studies.  Currently she is Director of the LSHTM Centre for Evaluation and her research primarily involves process evaluations of complex interventions in the fields of HIV prevention and treatment, safe labour migration, and adolescent sexual health in both Ethiopia and Zimbabwe.  



Evidence Standards and Justifiable Evidence Claims


Date :  Wednesday 6th February 2019

Location :  London

Speaker :  Professor David Gough




About the lecture


In developing findings and conclusions from their studies, researchers are making ‘evidence claims’. We therefore need to consider what criteria are used to make and justify such claims. This presentation will consider the use of evidence standards to make evidence claims in relation to primary research, reviews of research (making statements about the nature of an evidence base), and guidance and recommendations informed by research. The aim is to go beyond testing the trustworthiness (quality appraisal) of individual studies to discuss the ways in which evidence standards are used to make evidence claims to inform decisions in policy, practice, and personal decision making. The live audio and slide-show recording can be found by following this link: Live recording of lecture. Please note that the recording started early, so please fast-forward to 17 minutes. The slides can be downloaded here: Slide show

About David Gough

David is Professor of Evidence Informed Policy and Practice and the Director of the EPPI-Centre in the Social Science Research Unit (SSRU) at UCL. His early research was on child welfare at the University of Glasgow and at Japan Women’s University. Since moving to SSRU in 1998, his work has focused on methods of systematic mapping and synthesis and ‘research on research use’. Recent work includes the Wellcome Trust funded ‘Science of Using Science’ and the ESRC funded ‘UK What Works Centres: Aims, Methods and Contexts’. He is lead editor of Introduction to Systematic Reviews (2nd Edition) and Systematic Reviews and Research, both published by Sage.


Stakeholder Engagement for Development Impact Evaluation and Evidence Synthesis


Date :  Wednesday 23 January 2019

Location :  London

Speaker :  Professor Sandy Oliver




About the lecture


This lecture explores methods for engaging stakeholders in making decisions for international aid and social development in the presence and absence of relevant research. It draws on empirical evidence about engaging stakeholders in the generation and use of evidence, taking into account political analysis, social psychology and systems thinking. It finds that the suitability of methods for engagement depends largely on the confidence that can be placed in knowledge about the specific context, and in knowledge from elsewhere that seems theoretically or statistically transferable. When decisions are about generating new knowledge, the suitability of methods for engagement depends largely on whether the purpose is to generate knowledge for a specific context or for more generalisable use and, at the outset, on the confidence and consensus underpinning the key concepts of interest. The lecture will be available to view live at the link below: Live Recording

About Sandy Oliver

Sandy is Professor of Public Policy at the UCL Institute of Education. For thirty years her interests have focused on the interaction between researchers and people making decisions in their professional and personal lives, largely through the conduct of systematic reviews. She is a member of the Board of the Campbell Collaboration and an editor with the Cochrane Consumers and Communication Review Group. Her recent contributions to research synthesis methods come from working with the UK Department for International Development and the Alliance for Health Policy and Systems Research at WHO to build capacity in systematic reviewing in developing countries.


To boldly go where no evaluator has gone before: the CEDIL evaluation agenda


Date :  Wednesday 12th December 2018   

Location: London

Speaker :  Dr Edoardo Masset



About the lecture


In this lecture I will introduce the newly established Centre of Excellence for Development Impact and Learning (CEDIL). CEDIL was established by the UK Department for International Development to develop new evaluation methods and to commission evaluation and synthesis studies in neglected areas of international development. During its inception phase, CEDIL identified key methodological evaluation challenges to address and priority thematic areas. The talk will illustrate CEDIL’s ambitious evaluation agenda over the next five years, and will be followed by Q&A and discussion. The lecture will be available to view live at the link below: Live Recording

About Edoardo Masset

Before joining CEDIL, Edoardo was Deputy Director and head of the London office of the International Initiative for Impact Evaluation, which he joined after working for seven years as a Research Fellow at the Institute of Development Studies at the University of Sussex. Edoardo is an agricultural and development economist with extensive experience conducting impact evaluations, researching development interventions and consulting for a variety of institutions, including the World Bank. Edoardo also has experience of research synthesis work through a number of systematic reviews on a wide range of topics such as nutrition, health insurance and cost-effectiveness. His core research interests include rural development, child nutrition, poverty and inequality, and the analysis of household surveys.

Interventions to foster early childhood development: evaluation, sustainability and scalability


Date :  Wednesday 28th November 2018

Location: London

Speaker :  Professor Orazio Attanasio



About the lecture


Early childhood development (ECD) interventions have recently received much attention.  The consensus is that ECD interventions can ‘work’ and be very effective and important.  The new challenges, however, are: (i) understanding how interventions work and how they obtain the observed effects, through which channels, at what age, and so on; and (ii) how to scale up effective interventions. The answer to the second question is related to the answer to the first.  Orazio will present some concrete examples of these issues in this lecture. Watch the recording here

About Orazio Attanasio

Orazio is the Research Director of the IFS, one of the Directors of the ESRC Centre for the Microeconomic Analysis of Public Policy (CPP), and co-director of the Centre for the Evaluation of Development Policies (EDePo).  Orazio is a Professor at UCL and a Research Fellow at the Centre for Economic Policy Research.  In 2001 he was elected a Fellow of the Econometric Society, in 2004 he was elected a Fellow of the British Academy, and he is currently President of the European Economic Association.

Uncertainty and its consequences in social policy evaluation and evidence-based decision making


Date :  Wednesday 31st October 2018

Location :  London 

Speaker :  Dr Matthew Jukes



About the lecture


The methodologies of RCTs and systematic reviews imply a high level of rigour in evidence-based decision-making. When these standards are not met, how should decision-makers act? When a clear body of evidence is not available, there is a risk that action is delayed or that action is taken without optimal use of the existing evidence. This paper addresses the following question: what level of certainty is required for which kinds of decisions? We argue that decisions should be based on considerations of both the uncertainty and the consequences of all possible outcomes. We present a framework for making decisions on partial evidence that also has implications for the generation of evidence. More systematic analysis of uncertainty and its consequences can improve approaches to decision-making and to the generation of evidence. Watch the lecture here. Download the lecture slides here. Listen to the LIDC podcast on innovative approaches to evaluation and evidence synthesis. Please also see his blog post on this topic by following the link here.

About Matthew Jukes

Matthew is a Fellow and Senior Education Evaluation Specialist at RTI International.  He has two decades of academic and professional experience in evaluating education projects and is contributing to projects in Malawi and Tanzania aimed at improving the quality of pre-primary and primary education in those countries.  He has also applied his research to work with the World Bank, UNAIDS, UNESCO, USAID and  Save the Children.

Using mid-level theory to understand behaviour change: examples from health and evidence-based policy


Date :  Thursday 30th August 2018

Location :  New Delhi, India

Speaker :  Dr Howard White



About the lecture

Mid-level (or mid-range) theory rests between a project-level theory of change and grand theory. The specification and testing of mid-level theories help support the generalisability and transferability of study findings. For example, in economics, the operation of the price mechanism to balance supply and demand is a grand theory. An agricultural fertilizer subsidy programme would have a project-level theory which partly draws on the theory of supply and demand (lowering price increases demand). A mid-level theory could be developed relating to the use of price subsidies, of which the fertilizer programme would be a specific application. This talk will adopt the transtheoretical model of behaviour change to apply mid-level theory to the analysis of two sets of interventions: the adoption of health behaviour, and promoting evidence-based policy change. Watch the lecture here (currently in several parts). Download the lecture slides here.

About Howard White

Dr White is the CEO of the Campbell Collaboration and the Research Director for CEDIL; this presentation is based on work undertaken as part of CEDIL.

Development impact attribution: Mental models and methods in 'mixed marriage' evaluations


Date :  Wednesday 18th July 2018

Location :  London

Speaker :  Professor James Copestake



About the lecture

The marriage metaphor will be used to explore collaboration that spans academic traditions and disciplines, researchers and managers, public and private sector agencies.  The idea of mental models will be used to explore the ontological, epistemological, contractual and socio-political tensions created by formalised evaluative practice.  It will focus particularly on experience with mixing qualitative impact evaluation with other approaches to generating evidence, and learning and legitimising public action.  It will draw on case studies from the garment industry, medical training, housing micro-finance and agriculture spanning three continents. Watch the recorded lecture here

About James Copestake

Prof Copestake is a professor of international development at the University of Bath.  In addition to work on the Qualitative Impact Protocol, his recent research has addressed contested perceptions of well-being in Peru, and financial inclusion and micro-finance in India.  He has also researched the relationship between social policy and development studies, and the use of challenge funds in aid management.

Representing theories of change: Technical challenges and evaluation consequences


Date :  Wednesday 30th May 2018

Location :  London

Speaker :  Dr Rick Davies



About the lecture

This lecture will summarise the main points of a paper of the same name.  That paper looks at the technical issues associated with the representation of Theories of Change and the implications of design choices for the evaluability of those theories.  The focus is on the description of connections between events, rather than the events themselves, because this is seen as a widespread design weakness.  Using examples and evidence from a range of internet sources, six structural problems are described along with their consequences for evaluation.  The paper then outlines six different ways of addressing these problems which could be used by programme designers and evaluators.  These solutions range from simple-to-follow advice on designing more adequate diagrams, to the use of specialist software for the manipulation of much more complex static and dynamic network models.  The paper concludes with some caution, speculating on why the design problems are so endemic but also pointing a way forward.  Three strands of work are identified that CEDIL and DFID could invest in to develop the solutions identified in the paper. Watch the recorded lecture here

About Rick Davies

Dr Davies is an independent Monitoring and Evaluation consultant and is based in Cambridge, UK.  He has managed the MandE NEWS website and email lists since 1997.


The four waves of the evidence revolution: Progress and challenges in evidence-based policy and practice


Date :  Wednesday 11th April 2018

Location :  London

Speaker :  Dr Howard White




About the lecture

The evidence movement has rolled out in four waves since the 1990s: the results agenda, the rise of RCTs, systematic reviews, and the development of an evidence architecture.  This revolution is uneven across sectors and countries, and it remains unfinished.  Drawing on experiences from around the world, this talk will provide a historical overview of the evidence movement and the challenges it faces.  Responses to these challenges will be considered, including those offered by the work of CEDIL. Watch the recorded lecture here

About Howard White

Dr White is the CEO of the Campbell Collaboration and the Research Director for CEDIL; this presentation is based on work undertaken as part of CEDIL.