AEA365 | A Tip-a-Day by and for Evaluators


Hi! I’m Tara Gregory, Director of the Center for Applied Research and Evaluation (CARE) at Wichita State University. Like evaluators everywhere, the staff of CARE are frequently tasked with figuring out what difference programs are making for those they serve. So, we tend to be really focused on outcomes and to see outputs as the relatively easy part of evaluating programs. However, a recent experience reminded me not to overlook the importance of outputs when designing and, especially, communicating about evaluations.

In this instance, my team and I had designed what we thought was a really great evaluation that covered all the bases in a particularly artful manner – and I’m only being partially facetious. We thought we’d done a great job. But the response from program staff was “I just don’t think you’re measuring anything.” It finally occurred to us that our focus on outcomes in describing the evaluation had left out a piece of the picture that was particularly relevant for this client – the outputs or accountability measures that indicated programs were actually doing something. It wasn’t that we didn’t identify or plan to collect outputs. We just didn’t highlight how they fit in the overall evaluation.

Lesson Learned: While the toughest part of an evaluation is often figuring out how to measure outcomes, clients still need to know that their efforts are worth something in terms of the stuff that’s easy to count (e.g., number of people served, number of referrals, number of resources distributed). Although just delivering a service doesn’t necessarily mean it was effective, it’s still important to document and communicate the products of a program’s efforts. Funders typically require outputs for accountability, and programs place value on the tangible evidence of their work.

Cool Trick: In returning to the drawing board for a better way to communicate our evaluation plan, we created a graphic that focuses on the path to achieving outcomes, with the outputs offset to show that they’re important but not the end result of the program. In an actual logic model or evaluation plan, we’d name the activities, outputs, and outcomes more specifically based on the program, but this graphic helps keep the elements in perspective.

[Image: example graphic of outputs and outcomes]

The American Evaluation Association is celebrating Community Psychology TIG Week with our colleagues in the CP AEA Topical Interest Group. The contributions to aea365 all this week come from our CP TIG members.


Kirk Knestis at Hezel Associates here, back with the promised “additional post” about logic models, this time challenging the orthodoxy of including “outputs” in such representations. Irrespective of the style of model used to illustrate the theory behind a program or other innovation, it’s been my experience that including “outputs” can create confusion, often decreasing the utility of a model as evaluation planning discussions turn to defining measures of its elements (activities and outcomes). A common, worst-case example is the program manager who struggles to define the output of a given activity as anything beyond “it’s done.” I’m at the point where I simply omit outputs as standard practice when facilitating logic model development, and I ignore them if they appear in a model I inherit. If you encounter similar difficulties, I propose you do the same.

Lesson Learned – The W.K. Kellogg Foundation, in the foundational documentation often referenced on this subject, explains that outputs “are usually described in terms of the size and/or scope of the services and products delivered or produced by the program.” As “service delivery/implementation targets,” outputs are the completion of activities or the “stuff” those efforts produce. It’s generally understood that outputs can be measures of QUANTITIES of delivery (e.g., number of clients served, hours of programming completed, units of support provided). Less obvious, perhaps, is the idea that we should also examine the QUALITIES of those activities. Even more neglected is the understanding that the stuff produced can usefully be viewed as a source of measures of the quality of the activities that generated it. In short, outputs are data sources more than they are parts of an evaluand’s theory of action.

Hot Tip – Instead of including outputs as a separate column in tabular or pathway-style models, hold off on considering them until planning gets to defining how the quantities and qualities of delivery will be measured for “implementation” evaluation purposes. Keeping that distinct from the outcome measures that assess the “impact” of activities, this approach layers a “process-product” orientation onto implementation evaluation, looking at both the quantities and the qualities with which activities of interest are completed. It simplifies thinking by avoiding entanglement in seemingly redundant measures spread among activities and their outputs, and it can encourage deeper consideration of implementation quality, which is harder to measure and therefore easier to ignore. It also takes outputs out of the theoretical-relationships-among-variables picture, an important issue for evaluations testing or building theory.

Hot Tip – Work with program/innovation designers to determine attributes of quality for BOTH activities (processes) and the stuff in which they result (products). Develop and use rubrics or checklists to assess both, ideally baked into the work itself in authentic ways (e.g., internal quality-assurance checks or formative feedback loops).
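If it helps to make that concrete, here is a minimal sketch (in Python) of how one simple rubric structure might be applied to both a process and a product. The criteria, ratings, four-point scale, and quality bar are all hypothetical illustrations, not drawn from this post.

```python
# Minimal sketch: one rubric structure scored against both a process
# (an activity) and a product (an output). Criteria, ratings, the 1-4
# scale, and the quality bar are hypothetical examples.

RUBRIC_SCALE = {1: "inadequate", 2: "developing", 3: "proficient", 4: "exemplary"}

def score_against_rubric(name, ratings, quality_bar=3.0):
    """Average the 1-4 criterion ratings and label the overall result."""
    average = sum(ratings.values()) / len(ratings)
    return {
        "item": name,
        "average": round(average, 2),
        "overall": RUBRIC_SCALE[round(average)],
        "meets_quality_bar": average >= quality_bar,
    }

# Process (activity): a training session delivered by program staff.
process_ratings = {"fidelity to curriculum": 3, "participant engagement": 4, "facilitation": 3}

# Product (output): the participant workbook that session produced.
product_ratings = {"accuracy of content": 4, "usability": 2, "alignment with objectives": 3}

for item, ratings in [("training session", process_ratings), ("participant workbook", product_ratings)]:
    print(score_against_rubric(item, ratings))
```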

Hot Tip – Another useful trick is to consider “timeliness” as a third aspect of implementation, along with quantity and quality. Compare timelines of “delivery as planned” and “delivery as implemented,” measuring the time slippage between the ideal and the real and documenting the causes of that slippage.
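As a rough illustration of that comparison, the sketch below computes per-milestone slippage between planned and actual delivery dates and carries along a documented cause. The milestones, dates, and causes are invented for the example.

```python
from datetime import date

# Hypothetical milestones: planned vs. actual delivery dates plus a noted cause of slippage.
milestones = [
    {"milestone": "staff training", "planned": date(2015, 1, 15), "actual": date(2015, 1, 15), "cause": None},
    {"milestone": "first cohort enrolled", "planned": date(2015, 2, 1), "actual": date(2015, 2, 20), "cause": "referral delays"},
    {"milestone": "mid-year report", "planned": date(2015, 6, 30), "actual": date(2015, 7, 14), "cause": "staff turnover"},
]

for m in milestones:
    slippage_days = (m["actual"] - m["planned"]).days  # positive = late, 0 = on time
    status = "on time" if slippage_days == 0 else f"{slippage_days} days late"
    note = f" ({m['cause']})" if m["cause"] else ""
    print(f"{m['milestone']}: {status}{note}")
```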


