Hi! We are Shelly Engelman, Kristin Patterson, Brandon Campitelli, and Keely Finkelstein of the Texas Institute for Discovery Education in Science (TIDES) at the University of Texas, Austin. The mission of TIDES is to promote, support, and assess innovative, evidence-based undergraduate science education. A large part of our work entails working with STEM faculty to evaluate the efficacy and impact of education programs on students.
At the beginning of every project, our tendency as evaluators is to generate a logic model to visually represent how a program is intended to work and bring about change. Recently, however, faculty and staff reactions to logic models prompted us to change direction and create a new, more useful and palatable format.
Here are a few quotes from faculty highlighting some of the impediments to using logic models with STEM faculty:
This [logic model] is really hard to digest and full of jargon. What’s the difference between an output and an outcome?
Wow…this is over my head. I’d like to see this summarized in a table format.
Cool…but, I don’t know how useful this will be. It would be helpful if I saw a timeline with clearly delineated roles and responsibilities.
How does this logic model relate to the evaluation plan? I’d like to see the logic model and evaluation plan on one page…in one figure.
Where are my program’s goals in this model? Can we emphasize them more?
Using a ‘who, what, when, why, and how’ approach, we re-designed our logic model to use terms, concepts, and a format that is more familiar to STEM faculty and staff. This approach not only captures the program essentials, but also assigns roles and responsibilities to staff members, establishes a timeline of events, integrates the evaluation component, and emphasizes the program’s goals. Instead of nebulous concepts framing the logic model (e.g., outputs), our re-designed approach is organized by the following questions; note how they are aligned to components of a logic model/evaluation plan:
Question                                                        Aligned to…
Why (are we doing this?)                                        Impact/Outcomes
What (are we doing?)                                            Activities
Who (is responsible?)                                           Inputs
When (will it be done?)                                         Timeline
How (will we evaluate it to know if it was done effectively?)   Evaluation methodology
Cool Tricks: Use a table format to rearticulate a classic logic model
Guided by these questions, it is fairly straightforward to rearticulate a logic model in a table format. We found that non-evaluators gravitate to a table because they can clearly see the alignment between their program’s goals, activities, timeline, and evaluation methodology. Here is an example:
Contribute Your Own Best Practices
We recognize that the evaluation community has more to learn about effectively communicating logic models to non-evaluators. Previous AEA365 blog posts by Corey Smith and Matt Keene suggest that there is a need to explore alternative approaches to logic models. We invite you to share your best practices. For those interested, we could put together a panel presentation at a future AEA conference!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.