AEA365 | A Tip-a-Day by and for Evaluators

January 30, 2017

Kicking Outputs out of Logic Models by Kirk Knestis

Kirk Knestis of Hezel Associates is back with the promised “additional post” about logic models, this time challenging the orthodoxy of including “outputs” in such representations. Irrespective of the style of model used to illustrate the theory behind a program or other innovation, it’s been my experience that including “outputs” can create confusion, often decreasing a model’s utility as evaluation planning discussions turn to defining measures of its elements (activities and outcomes). A common, worst-case example is the program manager who struggles to define the output of a given activity as anything beyond “it’s done.” I’m now to the point where I simply omit outputs as standard practice when facilitating logic model development, and ignore them when they appear in a model I inherit. If you encounter similar difficulties, I propose you do the same.

Lesson Learned – The W.K. Kellogg Foundation explained in the foundational documentation often referenced on this subject that outputs “are usually described in terms of the size and/or scope of the services and products delivered or produced by the program.” As “service delivery/implementation targets,” outputs are the completion of activities or the “stuff” those efforts produce. It’s generally understood that outputs can be measures of QUANTITIES of delivery (e.g., number of clients served, hours of programming completed, units of support provided). Less obvious, perhaps, is the idea that we should also examine the QUALITIES of those activities. Even more neglected is the understanding that the stuff produced can usefully be viewed as a source of measures of the qualities of the activities that generated it. In short, outputs are data sources more than they are parts of an evaluand’s theory of action.

Hot Tip – Instead of including outputs as a separate column in tabular or pathway-style models, hold off considering them until planning turns to defining how quantities and qualities of delivery will be measured for “implementation” evaluation purposes. Distinguishing those from measures of outcomes that assess the “impact” of activities, this approach layers a “process-product” orientation onto implementation evaluation, examining both the quantities and the qualities with which activities of interest are completed. This simplifies thinking by avoiding entanglement in seemingly redundant measures spread among activities and their outputs, and it can encourage deeper consideration of implementation quality, which is harder to measure and therefore easier to ignore. It also takes outputs out of the theoretical-relationships-among-variables picture, an important issue for evaluations testing or building theory.

Hot Tip – Work with program/innovation designers to determine attributes of quality for BOTH activities (processes) and the stuff in which they result (products). Develop and use rubrics or checklists to assess both, ideally baked into the work itself in authentic ways (e.g., internal quality-assurance checks or formative feedback loops).
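
The rubric-or-checklist idea above can be sketched in a few lines of code. This is a minimal illustration, not anything prescribed in the post: the criteria, ratings, and `checklist_score` helper are all invented for the example, which simply scores a process (an activity) and a product (the stuff it yields) against yes/no quality criteria.

```python
# Minimal sketch of checklist-based quality scoring.
# All criteria and ratings below are hypothetical, invented for illustration.

def checklist_score(ratings):
    """Return the share of checklist criteria met, from 0.0 to 1.0."""
    return sum(ratings.values()) / len(ratings)

# Process (activity) quality: was the workshop delivered well?
process_ratings = {
    "facilitator followed the agenda": True,
    "all planned topics covered": True,
    "participants actively engaged": False,
}

# Product quality: is the resulting artifact usable?
product_ratings = {
    "handbook sections complete": True,
    "reviewed by a second staff member": True,
    "formatted to the program template": True,
}

print(f"Process quality: {checklist_score(process_ratings):.2f}")  # 0.67
print(f"Product quality: {checklist_score(product_ratings):.2f}")  # 1.00
```

Scoring both sides with the same instrument keeps the process-product distinction explicit while making the ratings easy to fold into routine quality-assurance checks.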

Hot Tip – Another useful trick is to treat “timeliness” as a third aspect of implementation, alongside quantity and quality. Compare timelines of “delivery as planned” and “delivery as implemented,” measuring time slippage between the ideal and the real, and documenting the causes of such slippage.
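
The planned-versus-implemented comparison above amounts to simple date arithmetic. Here is a small sketch; the milestones and dates are hypothetical, invented only to show the slippage calculation.

```python
from datetime import date

# Hypothetical milestones and dates, invented for illustration.
planned = {
    "recruit participants": date(2017, 2, 1),
    "deliver workshop 1": date(2017, 3, 1),
    "deliver workshop 2": date(2017, 4, 1),
}
actual = {
    "recruit participants": date(2017, 2, 10),
    "deliver workshop 1": date(2017, 3, 20),
    "deliver workshop 2": date(2017, 4, 25),
}

# Slippage per milestone, in days (positive = delivered late).
for milestone, plan_date in planned.items():
    slip = (actual[milestone] - plan_date).days
    print(f"{milestone}: {slip:+d} days")
```

A growing slippage across milestones, as in this toy example, is itself a finding worth documenting alongside the causes of each delay.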

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


2 comments

  • Agnes Kucharska · March 13, 2017 at 1:10 pm

    Hello Kirk,

I thoroughly enjoyed reading both of your posts on the AEA365 blog, “A Pathway to Logic Modeling Freedom” and “Kicking Outputs out of Logic Models.”

I have been learning about logic models in my Program Evaluation masters course at Queen’s University. Understanding and creating logic models was not an easy task; what was more difficult was explaining a logic model I designed to stakeholders. Your hot tip to free ourselves from labels is an important one to remember when speaking with stakeholders. The headings are often meaningless, but defining each element using context from their program clarifies the purpose of the logic model.

I found your additional blog on removing outputs from the logic model very intriguing. Since I am new at designing logic models, I admit that my initial understanding of outputs was limited to the measurement of quantities rather than qualities. I will have to keep your “process-product” approach, and your tip to consider “timeliness,” in mind next time I create a logic model.

Thank you for sharing the W.K. Kellogg Foundation’s Logic Model Development Guide; it’s useful for those of us who are beginners.

    Agnes Kucharska


  • Roxana Salehi · February 1, 2017 at 11:13 am

Kirk, I completely agree. I don’t recommend using outputs in logic models, as I find they ‘break the logic’ of the logic model. They should be addressed, but not in the logic model. Thanks for the post.

