Kirk Knestis at Hezel Associates back with the promised “additional post” about logic models, this time challenging the orthodoxy of including “outputs” in such representations. Irrespective of the style of model used to illustrate the theory behind a program or other innovation, it’s been my experience that inclusion of “outputs” can create confusion, often decreasing the utility of a model as evaluation planning discussions turn to defining measures of its elements (activities and outcomes). A common, worst-case example of this is when a program manager struggles to define the output of a given activity as anything beyond “it’s done.” I’m to the point where I simply omit outputs as standard practice if facilitating logic model development, and ignore them if they are included in a model I inherit. I propose that if you encounter similar difficulties, you might do the same.
Lesson Learned – The W.K. Kellogg Foundation, in the foundational documentation often referenced on this subject, explains that outputs "are usually described in terms of the size and/or scope of the services and products delivered or produced by the program." As "service delivery/implementation targets," outputs are the completion of activities or the "stuff" those efforts produce. It's generally understood that outputs can be measures of QUANTITIES of delivery (e.g., number of clients served, hours of programming completed, units of support provided). Less obvious, perhaps, is the idea that we should also examine the QUALITIES of those activities. Even more neglected is the understanding that the stuff produced can usefully be viewed as a source of measures of the quality of the activities that generated it. In short, outputs are better treated as data sources than as parts of an evaluand's theory of action.
Hot Tip – Instead of including outputs as a separate column in tabular or pathway-style models, hold off considering them until planning turns to defining how quantities and qualities of delivery will be measured for "implementation" evaluation purposes. Distinguishing those measures from measures of outcomes assessing the "impact" of activities, this approach layers a "process-product" orientation on implementation evaluation, examining both the quantities and the qualities with which activities of interest are completed. This simplifies thinking by avoiding entanglement in seemingly redundant measures spread among activities and their outputs, and it can encourage deeper consideration of implementation quality, which is harder to measure and therefore easier to ignore. It also takes outputs out of the theoretical-relationships-among-variables picture, an important issue for evaluations testing or building theory.
Hot Tip – Work with program/innovation designers to determine attributes of quality for BOTH activities (processes) and the stuff in which they result (products). Develop and use rubrics or checklists to assess both, ideally baked into the work itself in authentic ways (e.g., internal quality-assurance checks or formative feedback loops).
Hot Tip – Another useful trick is to consider "timeliness" as a third aspect of implementation, along with quantity and quality. Compare timelines of "delivery as planned" and "delivery as implemented," measuring the slippage between the ideal and the real and documenting its causes.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.