AEA365 | A Tip-a-Day by and for Evaluators

February 11, 2015

SWTIG Week: Brandon W. Youker on A couple thoughts on the ethical implications of goal-achievement evaluation

Welcome to my ramblings on evaluation. I’m Brandon W. Youker, social worker, evaluator, and professor at Grand Valley State University in Grand Rapids, Michigan.

I’ve been thinking about how many professionals are inculcated, during their graduate studies, with the idea that program evaluation equals the assessment of goal achievement. Students learn about goal setting and then about tools like theories of change and logic models. I don’t deny the legitimacy of these tools for monitoring your own programs, but relying on them as the sole strategy for evaluation leads to partial stories. According to the AEA’s Guiding Principles for Evaluators, evaluators have a responsibility to “consider not only immediate operations and outcomes of the evaluation, but also the broad assumptions, implications and potential side effects.”

Some common assumptions regarding goals and some counterpoints follow.

  1. The goals and objectives of the program funders, administrators, and managers are the ones that matter. What about the consumers’ or other stakeholders’ goals?
  2. The official goals and objectives are clearly articulated and agreed upon. Often, however, goals and objectives are written by a small group of executives and managers. Again, what about the consumers’ goals?
  3. Goals and objectives are relatively static. So what happens when conditions change? Should the evaluator simply scrap the old goals and adopt new ones, or keep irrelevant goals?
  4. Program administrators—and evaluators—can predict outcomes. Even if they could predict outcomes, they tend to search only for positive ones. Goal-based evaluation by design gives little—if any—attention to program side effects.

Lessons Learned: Program administrators feel that funders want goal-achievement evaluation.

On numerous occasions, I’ve been part of conversations with program administrators that sound something like the following:

Program Administrator: “Look at this but not that.”

Me: “Why not examine that area?”

PA: “Because we aren’t trying to do anything in that area.”

Me: “But isn’t that a critical area? And what if you were doing poorly there, wouldn’t your program suffer?”

PA: “Yes, but our funders don’t give us money to do anything in that area and therefore we don’t intentionally attempt to do anything with it.”

Hot Tip: Explore evaluation tools that don’t dictate goal-orientation. For example, Most Significant Change and Outcome Harvesting investigate outcomes without requiring evaluators to reference stated goals or objectives.

Rad Resources: Scriven’s entry on “goal-free evaluation” in his Evaluation Thesaurus outlines some limitations of goals and objectives. Additionally, I coauthored a 2014 paper in The Foundation Review titled “Goal-Free Evaluation: An Orientation for Foundations’ Evaluations,” in which I pleaded with philanthropic organizations to consider expanding their conception of evaluation and of how it should be conducted.

Thanks for your interest. Please contact me so we can discuss this further: youkerb@gvsu.edu.

The American Evaluation Association is celebrating SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to aea365 come from our SWTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


1 comment

  • Bethany · February 12, 2015 at 9:02 am

    Brandon, thank you so much for laying out those assumptions, counterpoints, and the sample dialogue. Especially after reading the dialogue, it finally dawned on me why I’ve felt uncomfortable, frustrated even, with my program staff and how they judge the quality of my evaluation work based only on the things they are trying to do, not the things that are most important to consumers. I feel less alone now knowing that my evaluation approach wasn’t just me going rogue, inventing irrelevant methods, not understanding evaluation, or not understanding my program. It was me thinking like an evaluator with the public good in mind. Thank you for making me feel less crazy.

