Impact evaluation is intended to address the question of effectiveness, i.e. what difference the programme made to the outcomes of interest. This analysis is usually done using a large-n statistical design, such as a randomized controlled trial.
There is growing recognition that the use of mixed methods in impact evaluations can add value in various ways, allowing the study to address additional questions about programme design and implementation, and thus why programmes work (or not) and how to make them work better. Mixed methods can also help frame the study, uncovering unintended outcomes or contextually relevant ways of framing questions about social constructs. They can also help in interpreting study findings, for example why people have not taken part in an intervention.
The idea of using mixed methods has a long tradition in social research. But it is also recognized that mixed methods are often poorly applied. In quantitative impact evaluations, the qualitative component, if any, is often poorly designed, integrated, or reported.