Hi, I’m Meg Hargreaves, a Senior Fellow at NORC at the University of Chicago, a member of the Systems in Evaluation TIG (SETIG), and a member of the work team that developed the 2018 Principles for Effective Use of Systems Thinking in Evaluation. The principles are an important resource in my work as an evaluator of large social change initiatives. I use them as a framework for ensuring that key systems and complexity concepts are applied consistently to all aspects of evaluation design and implementation. Today’s blog focuses on the use of these principles in evaluation design.
Evaluation design refers to the up-front work that happens before data are collected. Better Evaluation’s Rainbow Framework outlines these activities in detail. They include working with the evaluation’s primary intended users to establish a decision-making process, writing the evaluation plan, and helping users understand and use the project’s findings. Design is also about defining what project or initiative is to be evaluated and creating its logic model or theory of change. Finally, design involves setting the evaluation’s parameters, clarifying its purpose, refining its research questions, and identifying the data sources, methods, and measures for collecting and analyzing the data about the initiative and the conditions or context in which it was implemented.
Throughout the design process, I use the SETIG principles as a systems lens that helps me pay attention to differences and similarities among stakeholder groups; notice interdependencies between project elements; and use different vantage points to see the entirety of the initiative, including its evaluation. Unlike traditional program evaluation designs, systems-informed evaluation designs do not privilege individual change over other kinds of change. Rather, they document the interplay of individual, organizational, and collective actions, and they look for the limits and potential unsustainability of single, isolated programs, policies, and messages. When initiatives have both individual and systemic impacts, it is better not to choose between the two types of designs, but to integrate them into one interdisciplinary design.
Cool Trick: I recently used the SETIG principles to design three interdisciplinary evaluations. One is an evaluation of a scholarship program that goes beyond financing individual educations to teaching young scholars how to lead social change in their communities. The second is an evaluation of a county government that is working with new partners to change their area’s opportunity ecosystem while also helping families transition to financial stability. The third is an evaluation that links a cohort outcome study of a national learning collaborative to a site impact study that assesses the effectiveness of the new programs developed by cohort sites participating in the learning collaborative. These integrated designs document changes, especially in capacity, at individual and organizational levels, while assessing how those changes contribute to longer-term shifts in systemic patterns.
Hot tip: Use the SETIG principles to integrate program and social change evaluation designs where appropriate. Recognize that individual impacts contribute to much larger and longer trajectories of transformational change.
The American Evaluation Association is celebrating this week with our colleagues in the Systems in Evaluation Topical Interest Group. The contributions all this week to aea365 come from SETIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.