My name is Michael Harnar. I have been in the evaluation discipline for about 16 years and for the last 2.5 years I’ve been an assistant professor in the Interdisciplinary PhD in Evaluation program at Western Michigan University.
I believe that the core of evaluation is providing judgments of value and that Michael Scriven’s general logic of evaluation (develop criteria, identify standards, measure performance, judge against the standards) is fundamental to any evaluation endeavor. I also know that there is a universe of activity around this core function that is still part of the evaluation endeavor, but that activity should work toward applying this logic.
Working from this presumption, how might we apply that logic to our evaluation practice? What criteria might we use to evaluate our own work? There is plenty of useful guidance on how to metaevaluate. For example, Stufflebeam and Coryn, in the 2014 edition of Evaluation Theory, Models, and Applications, recommend both formative and summative metaevaluation and say that the Program Evaluation Standards by the Joint Committee on Standards for Educational Evaluation and the American Evaluation Association’s Guiding Principles are complementary tools with which to metaevaluate.
As part of a 2018 exploratory study of evaluation quality, we (Jeffrey Hillman, Cheryl Endres, Juna Snow, and I) asked a random selection of 1,000 AEA members, who self-identified as either “evaluator” or “consultant” in their membership profile, how familiar they were with the Program Evaluation Standards. Of the 142 who responded to the question, 32% had not heard of them and another 20% were only “slightly familiar.” Despite all the caveats about response rate and representativeness, I was still struck by how many had never even heard of the standards that our association purports to support.
Peter Dahler-Larsen, in his 2019 book, Quality: From Plato to Performance, describes a “quality as practice” perspective that is similar to Tom Schwandt’s hermeneutics of practice. From this perspective, all evaluation is context-dependent, and the well-trained evaluator who understands the nuances of evaluation theories and the rough ground of the context is in the best position to judge what the highest-quality evaluation practice would look like in that situation. Any instrument built to measure quality will inevitably need to be adjusted for each situation, making evaluation practice nearly impossible to judge reliably. Lest this seem a bit nihilistic, let me bring us back to the idea of standards.
Lesson Learned:
Some respondents to our study said that they regularly reflected on the Program Evaluation Standards and AEA’s Guiding Principles to assess their work. They documented the review and the improvements they made, and they shared that documentation with their clients, showing that they cared about doing quality evaluation work. As simple as it sounds, this fits the textbook definition of internal formative metaevaluation!
This week, we’re diving into the Program Evaluation Standards. Articles will (re)introduce you to the Standards and the Joint Committee on Standards for Educational Evaluation (JCSEE), the organization responsible for developing, reviewing, and approving evaluation standards in North America. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.