Research Project Paper 9
David Rhys Bernard, Gharad Bryan, Sylvain Chabé-Ferret, Jon de Quidt, Jasmin Claire Fliegner, Roland Rathelot
Consider a policy maker choosing between programmes of unknown impact. She can inform her decision using observational methods or by running a randomised controlled trial (RCT). Proponents of RCTs would argue that observational approaches suffer from bias of unknown size and direction and are therefore uninformative. We treat this as an empirical claim that can be tested. By doing so, we hope to increase the value of observational data and studies, and to better inform the choice of whether to undertake an RCT. We propose a large-scale, standardised, hands-off approach to assessing the performance of observational methods. First, we collect and categorise data from a large number of RCTs conducted over the past 20 years. Second, we implement new methods to estimate the size and direction of expected bias in observational studies, and to understand how this bias depends on measurable characteristics of programmes and settings. We find that the difference between observational estimators and the experimental benchmark is on average zero, but that the resulting distribution of observational bias has high variance.
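As a purely illustrative sketch (not the paper's actual estimators or data), the comparison underlying the bias measure can be thought of as follows: within an RCT with imperfect compliance, a naive "as-treated" contrast stands in for the observational estimator, the instrumental-variables (Wald) estimate based on random assignment serves as the experimental benchmark, and the per-study bias is their difference, pooled across studies. The data-generating process, function names, and parameter values below are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def study_bias(n=2000, true_effect=1.0, confounding=0.5):
    """Simulate one RCT with imperfect compliance and return the gap between
    a naive 'as-treated' observational estimate and the IV (experimental)
    benchmark. Names and parameters are illustrative, not the paper's methods."""
    z = rng.integers(0, 2, size=n)        # random assignment (the instrument)
    u = rng.normal(size=n)                # unobserved confounder driving selection
    # take-up depends on assignment and on the confounder (imperfect compliance)
    d = (z + confounding * u + rng.normal(size=n) > 0).astype(float)
    y = true_effect * d + confounding * u + rng.normal(size=n)

    # naive observational estimator: compare units by treatment actually received
    naive = y[d == 1].mean() - y[d == 0].mean()

    # experimental benchmark: Wald / IV estimate using random assignment
    itt = y[z == 1].mean() - y[z == 0].mean()
    first_stage = d[z == 1].mean() - d[z == 0].mean()
    benchmark = itt / first_stage

    return naive - benchmark

# pool per-study bias estimates and summarise the resulting distribution
biases = np.array([study_bias() for _ in range(200)])
print(f"mean bias: {biases.mean():.3f}, sd: {biases.std():.3f}")
```

In this toy setup the simulated bias distribution is centred away from zero by construction (through the confounding term), unlike the paper's empirical finding of mean-zero bias; the sketch only shows the mechanics of the study-level comparison and its aggregation.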
Suggested citation: Bernard, D. R., Bryan, G., Chabé-Ferret, S., de Quidt, J., Fliegner, J. C., Rathelot, R. 2023. How Biased are Observational Methods in Practice? Accumulating Evidence Using Randomised Controlled Trials with Imperfect Compliance, CEDIL Research Project Paper 9. Centre of Excellence for Development Impact and Learning (CEDIL), London and Oxford. Available from: https://doi.org/10.51744/CRPP9