
OPEN Week: Elizabeth O’Neill on When not to evaluate

I am Elizabeth O’Neill, Program Evaluator for Oregon’s State Unit on Aging and President-Elect of the Oregon Program Evaluators Network. I came to evaluation by an unlikely route, starting as a nonprofit program manager. As I witnessed the amazing dedication to producing community-based work, I wanted to know that the effort was substantiated. By examining institutional beliefs that a program was “helping” its intended recipients, I found my way to work as a program evaluator and performance auditor for state government. I want to share my thoughts on the seemingly oxymoronic angle I take in convincing colleagues that we do not need evaluation, at least not for every part of service delivery.

In the last few years, I have found tremendous enthusiasm in the government sector for demonstrating progress toward protecting our most vulnerable citizens. As evaluation moves closer to program design, I now develop logic models as the grant is written rather than when the final report is due. Much of my work involves leading stakeholders in conversations to operationalize their hypotheses about theories of change. I draw extensively from a previous OPEN conference keynote presenter, Michael Quinn Patton, and his work on utilization-focused evaluation strategies to ensure evaluation serves its intended use by its intended users. So you would think I would be thrilled to hear the oft-mentioned workgroup battle cry that “we need more metrics.” Instead, I have found that this idea tends to produce more navel-gazing than meaningful action. I have noticed how metrics can be developed to quantify that work got done, rather than to measure the impact of our work.

Lesson Learned: The excitement about using metrics stems from wanting to substantiate our efforts and to feel accomplished in our day-to-day activities. While process outcomes can be useful to monitor, the emphasis has to remain on long-term client outcomes.

Lesson Learned: As metrics become common parlance, evaluators can help move organizations from performance measurement to performance management, so the data can reveal strategies for continuous improvement. I really like OPEN founder Mike Hendricks’s work in this area.

Lesson Learned: As we experience this exciting cultural shift toward relying more and more on evaluation results, we need cogent ways to separate program monitoring, quality assurance, and program evaluation. There are times when measuring the number of times a workgroup convened may be needed for specific grant requirements, but we can’t lose sight of why the workgroup was convened in the first place.

Rad Resource: Stewart Donaldson of Claremont Graduate University spoke at OPEN’s annual conference this year to a spectacular response. His book, Program Theory-Driven Evaluation Science: Strategies and Applications, is a great resource for evaluating program impact.

The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
