We are Mat Walton, Emily Gates, and Pablo Vidueira. Mat is the Technical Lead in the Social Systems Team at the Institute of Environmental Science and Research (ESR) in New Zealand. Emily is an assistant professor at Boston College who conducts evaluations of educational and health interventions and researches systems thinking, values, and equity in evaluation. Pablo is an evaluation consultant, researcher, and professor affiliated with the Global Alliance for the Future of Food, the Comillas Pontifical University, and the Technical University of Madrid.
We share a belief that systems thinking and complexity science can enhance and transform evaluation practice. We jointly edited an issue of New Directions for Evaluation that draws together diverse examples of using systems thinking and complexity science within evaluative practice. This week, AEA365 is sharing a series of blog posts related to the journal issue.
What do we mean by systems thinking and complexity science? There are diverse approaches within the systems and complexity fields, such as systems science, cybernetics, operational research, complexity theory, and complexity science. Each approach provides a theoretical lens for understanding the world and a set of methods. While some approaches within the systems and complexity fields may be at odds with each other, commonalities in principles have been identified and described, for example by the AEA Systems TIG. We believe that this mixture of commonality and diversity within the systems and complexity fields provides a rich resource for evaluation practice.
Across ten examples, we saw that systems and complexity approaches could be used in diverse evaluation settings: establishing program theory and monitoring and evaluation frameworks, supporting new collaborative organizational structures, building evaluation capacity, conducting cross-cultural evaluation, practicing design-driven evaluation, and supporting systems change efforts. This diversity of examples, we think, offers useful guidance both to those new to systems- and complexity-informed evaluation and to experienced practitioners.
Lessons Learned
Choosing the right systems or complexity approach for the evaluation task is important. Evaluators should be explicit in examining the boundaries around their own systemic practice. It also appears important to assemble the right mix of expertise within the evaluation team. Roles include navigating the systems and complexity fields to fit the approach to the evaluation focus, integrating subject-matter expertise, and drawing on tools and insights from across the evaluation field. We see value in using systems concepts and principles, but also in going beyond them into the specific theory and methodology of the chosen systems or complexity approach.
The systems and complexity fields also imply a place for evaluative thinking in designing programs, deciding what the focus of evaluation might be, and using evaluative information. This is not unique to systems- or complexity-informed evaluation, yet it is a recurring theme across the field.
Rad Resources
New Directions for Evaluation issue 170 – Systems and Complexity-Informed Evaluation: Insights from Practice
A special issue of the journal Evaluation on policy evaluation for a complex world, developed by the Centre for the Evaluation of Complexity Across the Nexus (CECAN) research group in the UK, with several open-access articles.