Hello! I am Brandon Coffee-Borden and I served as Program Co-Chair of the Systems in Evaluation Topical Interest Group (SETIG) from 2015 to 2018. This blog post describes how I have used the Principles for Effective Use of Systems Thinking in Evaluation to support my practice of data analysis and interpretation in a systems-informed evaluation.
I used the principles, in part, to guide the development of an analytic approach for a systems-informed evaluation of a U.S.-based philanthropic initiative. The initiative was designed to reshape how key actors and institutions within the system perceived the problem of interest, increase the priority and attention given to that problem, develop stronger networks among key players within the system, and encourage those players and institutions to take on a greater role in addressing it.
The system at the center of the evaluation exhibited many features of a complex system, including entangled webs of relationships and activities. The principles were a critical reminder that such complexity often generates unpredicted outcomes that emerge through the interactions of the many parts of the system or actors within and across levels of the system.
As our team began to analyze our qualitative data from interviews and key documents to explore outcomes that the initiative contributed to, we were attentive to how outcomes manifested from the work of individual grantees and partners as well as from cohorts of grantees and partners. We examined how outcomes were connected to and reinforced each other (or not) as well as processes of change (for example, how outputs led to one or more outcomes). We also interrogated how, if at all, outcomes combined to create broader and deeper changes in the system.
As we moved through this analysis process, we remained sensitive to the types of dynamic complexity that might be present in the system at the center of the evaluation, such as change at different timescales, feedback loops, nonlinearity, and historical dependence.
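To make this kind of analysis concrete, here is a minimal sketch of one way coded outcome relationships might be represented as a directed graph so that reinforcing connections and feedback loops can be surfaced systematically. This is not our actual analysis code: the outcome names and links below are hypothetical, and the networkx library is just one of several tools that could serve this purpose.

```python
# A minimal, hypothetical sketch: representing qualitatively coded
# outcome relationships as a directed graph to surface reinforcing
# connections and feedback loops. Outcome names and links are
# illustrative, not data from the actual evaluation.
import networkx as nx

# Each edge means "analysts coded evidence that X contributed to Y."
coded_links = [
    ("awareness_of_problem", "priority_on_agendas"),
    ("priority_on_agendas", "network_ties_among_actors"),
    ("network_ties_among_actors", "institutional_action"),
    ("institutional_action", "awareness_of_problem"),  # a possible feedback loop
]

g = nx.DiGraph(coded_links)

# Feedback loops appear as cycles in the directed graph.
for cycle in nx.simple_cycles(g):
    print("Potential feedback loop:", " -> ".join(cycle))

# Outcomes that many others feed into may signal broader, deeper system change.
for outcome, in_degree in sorted(g.in_degree, key=lambda x: -x[1]):
    print(f"{outcome}: reinforced by {in_degree} other outcome(s)")
```

A sketch like this can only complement, never replace, careful qualitative interpretation of how and why those links operate in context.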
This approach to analysis helped our team obtain a deeper and more nuanced understanding of how the initiative contributed to changes in system conditions and behavior. It also enhanced our ability to provide useful recommendations for similar philanthropic efforts in the future.
Hot Tip: The principles provide practical guidance to evaluators and evaluation stakeholders on designing and implementing evaluations with a systems lens. Nonetheless, they are not a replacement for a project- and context-specific analytic framework and methodological approach to data analysis and interpretation.
Rad Resource: The book, Systems Concepts in Evaluation, provides a good introduction to a variety of systems concepts and methodologies within an evaluation context. It’s available on Amazon: Williams, B., & Imam, I. (2007). Systems concepts in evaluation: An expert anthology. Point Reyes, CA: EdgePress of Inverness.
The American Evaluation Association is celebrating this week with our colleagues in the Systems in Evaluation Topical Interest Group. The contributions all this week to aea365 come from SETIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Brandon,
Thank you for providing this fantastic and insightful example of using the Systems in Evaluation Principles for your data analysis! I think all too often, the data collected don't truly tell the narrative they should, and opening up interpretation to look at larger impacts and interconnections is an essential way to ensure the evaluation is used to improve programming.
I am particularly drawn to your statement, “[w]e examined how outcomes were connected to and reinforced each other (or not) as well as processes of change (for example, how outputs led to one or more outcomes).” I feel compelled to ask: were the outcomes seen as discrete at the beginning of the evaluation process and only upon completion of the analysis recognized as being interconnected and impacting each other? Or were you able to predict interconnected outcomes because you applied a systems lens to the evaluation design?
Second, while, as you mention, a systems approach cannot replace a “project- and context-specific analytic framework and methodological approach,” is there any situation in which applying both a big-picture systems lens and a targeted context lens would be detrimental to the evaluation?