Mixed-Method Evaluations Lend Rigor to Design

A post from the 2016 American Evaluation Association (AEA) annual meeting on mixed-method evaluations.

Heidi Reynolds presenting at Evaluation 2016.
by Heidi Reynolds, PhD, director of evaluation, MEASURE Evaluation

Mixed methods is an evaluation approach that combines multiple data sources and types of data. While the data are not usually literally mixed (as in a single database), the analysis and interpretation are coordinated, resulting in a whole that is greater than the sum of the parts. People less familiar with the approach may think of mixed methods as simply adding a qualitative component to an experimental or quasi-experimental pre- and post-intervention quantitative study design. However, sessions I attended at the American Evaluation Association (AEA) annual meeting held this week in Atlanta showed that it is much more than that.

Mixed methods can increase the rigor of evaluations of interventions that operate in complex contexts: where, for example, identifying comparison groups is challenging, where vulnerable populations are hidden from household surveys, and where interventions, such as those intended to strengthen health information systems, have not been the subject of rigorous evaluation.

Employing multiple methods can fill gaps in understanding that any single method would leave. For example, quantitative surveys with multiple respondents and careful sampling can measure certain indicators with precision. Analytic methods can construct a counterfactual where one was not possible in the design. Theory-based methods use the program's design to help understand which program characteristics affect outcomes. Case-based approaches dive deep to extract learning from a limited number of cases. Participatory methods draw on program participants, implementers, and the people and groups who may use the results, among others.
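As one illustration of the analytic-methods point above, the following minimal sketch (in Python, using pandas) constructs a simple difference-in-differences estimate from pre- and post-intervention coverage data, with the comparison group's change standing in for a counterfactual when a randomized comparison group was not part of the design. Everything in the example, including the groups, values, and variable names, is invented for illustration and is not drawn from this post or from any MEASURE Evaluation study.

```python
# A hypothetical sketch of one "analytic counterfactual":
# a difference-in-differences estimate from pre/post coverage data
# when a randomized comparison group was not possible in the design.
# The groups, values, and column names are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "group":    ["intervention"] * 4 + ["comparison"] * 4,
    "period":   ["pre", "pre", "post", "post"] * 2,
    "coverage": [0.42, 0.44, 0.61, 0.59,    # intervention districts
                 0.40, 0.41, 0.47, 0.46],   # comparison districts
})

# Mean outcome for each group in each period
means = data.groupby(["group", "period"])["coverage"].mean()

# Change over time within each group
change_intervention = means[("intervention", "post")] - means[("intervention", "pre")]
change_comparison = means[("comparison", "post")] - means[("comparison", "pre")]

# The comparison group's change stands in for what would have happened
# in the intervention group without the program (the counterfactual).
did_estimate = change_intervention - change_comparison
print(f"Difference-in-differences estimate: {did_estimate:.2f}")
```

In practice an evaluator would add covariates, standard errors, and checks of the parallel-trends assumption, and would interpret the estimate alongside qualitative findings about how and why the change occurred; the point here is only that a comparison can sometimes be constructed analytically rather than by design.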

Evaluators' awareness of and experience with these methods are increasing as more mixed-method approaches are used, with much credit due to the advocates of systems thinking and complexity awareness in our work. These systems thinking approaches include "complex-aware" methods, applications, and experiences, such as the "most significant change" method, which was presented in two sessions organized by MEASURE Evaluation, and by Jessica Fehringer in particular. Acceptance and use of complex-aware methods offer more tools to evaluate complex interventions with long causal pathways, multiple program components, non-linear program effects, and changes in the program itself. They also enable us to capture unintended consequences, unexpected findings, and contextual factors.

The theme of the 2016 AEA meeting was "Evaluation + Design." Thus, the notion of evaluation informing program design, and design informing evaluation approaches, was widely discussed. Many mixed-method approaches are consistent with that. They involve more discussion among the evaluator, program designers, and users of the results over the course of evaluation and program design, implementation, and the dissemination and interpretation of results. Evaluation has moved away from the practice of the evaluator being external to program design and implementation. While evaluators will always want to maintain objectivity when designing, implementing, and interpreting, this no longer means being literally independent of the program.

Filed under: Evaluation