My name is Jonny Morell. I'm a senior policy analyst at the Vector Research Center for Enterprise Improvement, a division of TechTeam Government Solutions. I have been writing a lot about two topics: unexpected behavior in programs and logic models. Recently these have been knitting together in my mind.
I'm beginning to appreciate how the visual form of a logic model (e.g., flowchart, system diagram, input/output columns of words) influences beliefs about both program theory and choices concerning measures and methodologies. Choices about measures and methodologies, in turn, affect beliefs about what a program will do and what evaluation can reveal. And those expectations about programs and evaluations, in their turn, determine what unexpected phenomena lie in wait for evaluators.
To illustrate, imagine two logic models for the same program. One uses columns of words to list inputs, outputs, and the like, all at a very detailed level. The second is a flowchart drawn at a coarser level of detail, but rife with feedback loops and relationships between the program and its environment. I'd bet that these two models would lead to very different articulations of program theory, and to very different methodologies and measures. I know the specifics of form and detail would seduce me, and I am pretty sure they would seduce others as well. These form-content relationships are an inescapable consequence of using any kind of logic model.
Hot tip: Develop four logic models instead of just one. Make sure the models vary in form and content. It does not matter which visual forms you pick, as long as they differ. As for content, vary the models by level of detail, by richness of feedback loops, or both. Use each model to determine quantitative and qualitative measures and methodologies. Then cast a critical eye over all four possibilities and look for similarities and differences in program theory, and for common and unique outcomes. Finally, it would not hurt to ask a question about each model: Does the level of detail reflect what we really know, or at least really believe, about how this program works?
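If it helps to make the comparison concrete, here is a minimal sketch in Python of the "common and unique outcomes" step. The model names and outcomes are entirely hypothetical, invented for illustration; the idea is simply to treat each model's predicted outcomes as a set and then intersect and difference the sets.

```python
# Hypothetical sketch: compare the outcomes implied by four logic models.
# Model names and outcomes are invented for illustration only.
models = {
    "detailed_columns": {"enrollment", "skill_gain", "job_placement", "wage_growth"},
    "coarse_flowchart": {"skill_gain", "job_placement", "community_ties"},
    "feedback_rich":    {"skill_gain", "community_ties", "staff_burnout"},
    "hybrid":           {"enrollment", "skill_gain", "job_placement"},
}

# Outcomes every model predicts: strong candidates for core measures.
common = set.intersection(*models.values())

# Outcomes unique to a single model: candidates for probing assumptions.
for name, outcomes in models.items():
    others = set.union(*(o for n, o in models.items() if n != name))
    unique = outcomes - others
    print(f"{name}: unique outcomes = {unique or 'none'}")

print(f"Common to all four models: {common}")
```

The point is only that laying the four outcome sets side by side makes overlaps and gaps visible at a glance; deciding what those overlaps and gaps mean for program theory remains interpretive work.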
Rad Resource: I have a workshop on logic models that touches on many of these issues. The slides are downloadable from my digital scrapbook at http://www.jamorell.com/. I also have an article that may be useful: Jonathan A. Morell, "Why Are There Unintended Consequences of Program Action, and What Are the Implications for Doing Evaluation?"* American Journal of Evaluation, 26(4), December 2005, 444-463. Come summer, my book on the topic will also be available from Guilford: Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable.
*American Evaluation Association members – sign in to the AEA website using your AEA username and password and navigate to the journals. This article, as well as all archival content from the American Journal of Evaluation, is free to you as part of membership.
This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.