How biased are observational methods in practice? Accumulating evidence using RCTs with imperfect compliance

Programme of work

Increasing evidence use

Principal investigator(s)

Roland Rathelot

Host institution

University of Warwick

Other institutions

London School of Economics
Toulouse School of Economics
Institute for International Economic Studies, University of Stockholm

Dates

January 2020 to June 2021 (TBC)

Project type

Evidence synthesis

Country/ies

Low-income, fragile and conflict-affected states

Research question

In commissioning impact evaluations, experimental methods such as randomised controlled trials (RCTs) are often preferred because observational methods are considered to be at risk of biases of unknown size and direction. This project will take a large-scale, standardised approach to assessing the performance of observational methods, to better understand the size and direction of these biases and how bias depends on measurable characteristics of programmes and settings.

Research design

The team will begin with data collection and cleaning, followed by the development and testing of empirical tools, and then data analysis and synthesis. This will be followed by the development of an online platform for researchers.

Data source

Electronic databases and websites.

Policy relevance

This project is relevant both to practitioners who use observational impact evaluation methods and to policymakers who care about the impact of their policies. It will help inform practitioners which methods to use when evaluating a particular programme and how best to implement them. It will help policymakers better understand the strengths and weaknesses of the methods used in existing and new evaluations, enabling them to make better-informed decisions. Given its large-scale, standardised approach, the project will benefit policymakers and evaluators in all countries and across many different topics.