When will we ever learn to learn from what we already know? Harnessing the power of the evidence revolution

Howard White | 15th March, 2022

What can we learn from what worked – and what didn’t work – in evidence use in the pandemic? Howard White shares his reflections in advance of next week’s CEDIL conference: Strengthening evidence use during the pandemic and beyond

One of the CEDIL consortium members, 3ie, was set up in response to the Centre for Global Development’s report When Will We Ever Learn? Closing the Evaluation Gap. That report argued that billions of dollars of aid money were spent each year on projects of unknown effectiveness because of weak evaluation practice. The lack of evidence means money is wasted on programmes which don’t work.

The creation of 3ie was part of what I have called the second wave of the evidence revolution, along with the Abdul Latif Jameel Poverty Action Lab (J-PAL), Innovations for Poverty Action (IPA), the Center for Effective Global Action (CEGA), Development Impact Evaluation (DIME), and the Strategic Impact Evaluation Fund (SIEF). More recently they have been joined by developing country organizations such as the BRAC Institute of Governance and Development (BIGD) and the International Center for Evaluation and Development (ICED). Together, these groups and others are powering a huge rise in the number of impact evaluations, especially randomized controlled trials (RCTs).

The increase in rigorous impact evaluations of development programmes has been matched in other sectors in developed countries: education, social welfare, crime and justice and so on have all seen similarly large rises in impact evaluations in the last 15-20 years. We should have learned a lot by now. But doing studies is not the same as learning.

The upcoming CEDIL conference has the theme of ‘strengthening evidence use during the pandemic and beyond’. So how good was the use of evidence in the pandemic? And, in particular, what use was made of evidence about what we already know, especially as summarized in systematic reviews?

Many research organizations quickly put together COVID collections which marshalled existing resources relevant to decisions to be made during the pandemic. There are even collections of collections, like COVID-END. My own organization, the Campbell Collaboration, published a special collection of 50 relevant systematic reviews, and joined forces with Evidence Aid to make our relevant reviews available in their COVID collection. We also published a series of blogs on lessons to be learned from those reviews.

Some reviews had lessons of obvious and immediate relevance, such as the reviews on interventions to promote handwashing, the use of cash transfers in emergency settings, and e-learning for health care workers. As the blog on economic responses argued, governments could go beyond cash handouts to consider ways to support youth employment.

Other reviews addressed social policy adaptation approaches.

For example, my own blog presented the evidence for restructuring education with smaller class sizes, shift teaching, later start times for adolescents (which really just needs to be done anyway), and year-round learning as a way of keeping children in school with social distancing. The additional staff would come from the expansion of graduate volunteer programmes, which are already common and provide at least as good learning outcomes as regular trained teachers. Instead, schools around the world were closed. In some countries, such as Uganda, no child attended school for two years.

In short, there was evidence aplenty to inform policy responses to the pandemic.

But, whilst governments said they were ‘following the evidence’, they were only doing so to a limited extent. The lockdowns implemented in countries around the world focused on the single metric of COVID cases or hospitalizations. Less attention was paid to socio-economic impacts and the evidence base for appropriate policy responses to those issues. Why was that?

Trish Greenhalgh’s wonderful book How to Read a Paper points out that a doctor doesn’t want to know ‘does this treatment work in general?’ They want to know ‘will it work for my patient?’ Similarly, decision-makers want evidence for their country, their state, their district, their town or city, their neighbourhood, or even their street. And they want evidence from now, not last year.

But they are wrong about this. Evidence is transferable. If I hold an apple in the air and let it go, it will fall to the ground. That is as true in Bamako as in Birmingham, and as true in Chittagong as in Chicago. I use the apple example deliberately, as the seventeenth-century physicist Isaac Newton had a very simple approach to transferability. He argued that if something works in one place, then we should assume it will work everywhere. And he set about developing laws of physics accordingly. Understanding the causal process of why Newton’s apple fell applies equally to any object dropped from a height.

The CEDIL programme of work on using mid-range theory to assess transferability takes a somewhat similar approach. A given programme works in this setting. By understanding the underlying causal processes of how the programme works, we can make a reasoned assessment of whether it will work elsewhere, or what adaptations are needed for it to do so in a different context. COVID wasn’t our first pandemic: HIV/AIDS, SARS and others had all been more or less successfully tackled. And, more generally, as documented above, there was a wealth of other COVID-relevant evidence being marshalled, especially in systematic reviews. Systematic reviews are the third wave of the evidence revolution. They are making a steady advance, but their day is yet to come. Their day will have come when a decision-maker, faced with a decision, asks – as they do in Nordic countries – what do the systematic reviews say?

The failure to use this wealth of available evidence must be one of the lessons we learn in COVID retrospectives. Promoting the use of evidence – the fourth wave of the evidence revolution – is itself an area of research, including a CEDIL programme of work. The CEDIL conference will address the issues raised in this blog: what we have learned from the COVID pandemic about generating and using evidence, how machine learning may have helped yield timely results, and a focus on the response in education. You can register for free for the CEDIL conference on 22-25th March 2022 with this link.

Howard White is the Research Director of CEDIL, Head of the Campbell Collaboration Research Programme, and Director of Evaluation and Evidence Synthesis at the Global Development Network.

Image credit: RTI International
