Authors: Attanasio, O., Blair, D., Krutikova, S.
Estimating the impact of an intervention on certain outcomes of interest is not sufficient for the
effective and efficient design of policies. Policy makers need to understand the mechanisms behind
the impacts of a given intervention and the results of an evaluation. This paper will discuss ways in
which structural modelling techniques can address the key issues of using and extrapolating the
results of policy evaluations. These issues are relevant for the treatment effects approaches to policy
evaluation used in many disciplines, including biostatistics, epidemiology, educational research and
(more recently) economics. One aim of the paper is to make this material accessible to policy makers
and practitioners: the main ideas will be presented in a simple way, using straightforward examples. We want to
dispel the idea that structural models are excessively technical and academic and make it clear when
the use of structural models is particularly useful.
The promise of structural modelling has been recognised for a long time: evaluating policies in the
context of models that make explicit assumptions about preferences, technology, available
information, constraints, rules of interaction and so on provides the necessary framework to enable: (1)
theory-driven interpretation of empirical evidence, including, crucially, insight into the mechanisms
behind observed impacts; (2) forecasting the effects of modifications to existing policies or of implementing
existing policies in new contexts; (3) combining evidence from multiple studies.
Historically, however, progress in the use of structural models for policy evaluation has been
hindered by the imposition of what has been viewed by many as implausibly strong identifying
assumptions. The rise in the popularity of treatment effects approaches in social sciences which take
the randomized trial as the gold-standard can be seen as a response to this issue. Approaches based
on treatment effects estimated by RCTs or natural experiments are appealing because they require far
fewer assumptions relating to exogeneity, functional forms, exclusion restrictions and distributional
properties than structural methods. However, they can answer only a narrow subset of the questions that are
relevant for policy design and that it is theoretically possible to answer using structural methods.
First, many important policies and interventions are not amenable to evaluation through natural
experiments or randomized trials. Second, even among those that are, the scope for interpreting
the estimated effects, as well as their external validity and their contribution to existing
knowledge, is very limited.
A promising direction for methodological innovation and future research in the field of policy
evaluation is, therefore, combining the strengths of structural modelling and treatment effects
approaches. Over recent years a number of seminal studies have demonstrated the potential of this
approach to broaden the range of questions that can be answered within policy evaluation while
maintaining the high standards for credible identification instituted by the treatment effects literature.
This is the focus of our paper. We start by discussing how randomized trials can be used to validate
the assumptions that are necessary in the estimation of structural models. Here we discuss basic
approaches and specific examples of ways in which estimates from randomised trials can be
compared to those from structural models to validate the key assumptions in these models. The
validated models can then be used to predict responses to policy modifications, extrapolate findings
to different situations, predict long-run effects and estimate impacts of policies not amenable to
randomization (e.g. infrastructure projects).
We then turn to the question of how structural models can be used to make findings from
randomized trials more informative. For example, Attanasio et al. (2015) estimate a structural model
of skill production to find that the positive impact of an early childhood stimulation programme,
trialled using a randomized design, is explained by increases in the level of parental investment
rather than the way in which skills are produced.
Finally, we discuss the challenge of tailoring the specific structural parameters being estimated to
the policy questions of interest and opportunities for credible identification. Most policy questions
do not require estimation of a fully specified model; the key to effectively combining treatment
effects and structural methods in policy evaluation is striking the right balance between complexity
and robustness of underlying assumptions to answer a wider set of questions than either method
can alone while maintaining a high standard for what assumptions are considered credible.
The final paper will be available on the website by the end of September 2018.