My name is Kylie Hutchinson. I am an independent evaluation consultant and trainer with Community Solutions Planning & Evaluation. In addition to my usual evaluation contracts, I deliver regular webinars on evaluation topics, tweet weekly at @EvaluationMaven, and co-host the monthly evaluation podcast, Adventures in Evaluation, along with my colleague @JamesWCoyle.
Logic models (logical frameworks, logframes, whatever you prefer to call them) have been a staple of the evaluation field since the 1990s. However, the more I embrace complexity and systems thinking, the more I wonder about their utility for evaluating programs. Life is messy, and we all know that programs rarely unfold over time as they were originally intended. Logic models are static, simple, and predictable, while real life is dynamic, complicated, and somewhat unpredictable. So what’s a conscientious evaluator to do?
Hot Tip: Don’t throw the baby out with the bathwater.
While I’m not the logic model zealot I was ten years ago, I’m not exactly ready to abandon them either. I think logic models can still be useful depending on the particular context of the evaluation. I can think of many previous evaluations I’ve conducted where the process of collaboratively developing a logic model with program stakeholders was critical to building a shared understanding of the program and support for the evaluation. I think the problems begin when we become so tied to it (“But it’s what we sent the funder!”) that we’re afraid to let the program adapt or evolve as necessary over time, or we develop blinders to all the other factors in the system that may influence the program’s outcomes.
Why on earth can’t we change a logic model mid-stream? Those of you who are familiar with Developmental Evaluation will recognize this tune very well. But consider this: even Michael Quinn Patton himself has said that Developmental Evaluation is not appropriate for every evaluation context. For certain evaluations, program fidelity is critical. So I think it’s a balancing act. I also predict that we’ll see the logic model evolve into something slightly different over time, something that better reflects the complex world that programs operate in.
Rad Resource: The Logic Model Rubric. If you’re new to logic models, I’ve developed a simple rubric for writing logic models that can help you get started.
Rad Resource: The Little Logic Model Webinar. I also regularly offer a short and affordable webinar that introduces new evaluators and program staff to logic models in terms of their development, use, and most importantly, their limitations.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Kylie –
You touch on a lot of great points here. I agree: don’t throw out the baby with the bathwater. I’m also excited about the idea of blending systems thinking and complexity with logic modelling techniques. To that end, I would propose that embracing complexity and systems thinking should encourage us to evolve the “traditional” logic model into a tool that incorporates ideas like emergence and feedback behavior, so evaluators can use it to strengthen program design and anticipate unintended consequences of system behavior.
So I don’t think logic models are passé – but I do think they need to evolve.
Thanks for a thought-provoking post.
Yes, I totally agree, and can’t wait to see what evolves in the future.
Thanks Ann! Believe it or not, it was actually part of a school assignment I had to do years ago for a course in Instructional Design.
Kylie – Love your logic model rubric. Thanks for sharing.