I’m Melissa Chapman Haynes, Director of Evaluation at Professional Data Analysts and adjunct at the University of Minnesota.
Each spring I teach an Evaluation Internship course to graduate students from various sectors. And each spring, this is the one time of year that I utter a couple of words that make me cringe. I use these words because they are used in the evaluation literature, and some of them are useful for introducing certain concepts. But as someone who spends most of my time as a practicing evaluator, I cringe every time I say them. This post highlights a duality of language between the evaluation literature and evaluation practice that I would like us to address.
Both terms highlighted in this post are derived from the word evaluation: evaluand and evaluability assessment. These are not terms I would use with a client! It is not immediately obvious what either concept means, and neither makes evaluation more accessible.
The first term, evaluand, implies that evaluation is done to something. The Encyclopedia of Evaluation defines evaluand as “a generic term coined by Michael Scriven, may apply to any object of an evaluation. It may be a person, program, idea, policy, product, object, performance, or any other entity being evaluated.” This term is likely generic by design, but is it necessary to have a generic term to describe what is evaluated? If I am doing a program evaluation, then what I evaluate is the program. If it is a principles-focused evaluation, it is the principles.
There are various online templates and tools for the second term, evaluability assessment, including this tool and this briefing. I’ve found some of these useful as teaching tools, particularly with evaluators newer to the field. Like most tools, they provide a starting point for determining whether an evaluation should happen at all. But they make me cringe for two reasons. First, it’s not a term I would use with clients; I find it much more useful to draw on the Program Evaluation Standards, particularly feasibility and utility. Second, these assessments put all of the focus on the evaluation and none on the evaluator. Let’s place greater emphasis on matching an evaluator’s skills to the specific context and evaluation approach, and on the credibility of the evaluator or evaluation team to do the work.
Hot Tip:
Words matter: “Language is power, life and the instrument of culture, the instrument of domination and liberation.” – Angela Carter
Rad Resources:
- There is a brief discussion of alternatives to evaluability assessment (as well as many resources about evaluability assessment) here: https://www.betterevaluation.org/en/themes/evaluability_assessment#eval_assess_7
- Program Evaluation Standards, 3rd edition: http://www.jcsee.org/program-evaluation-standards-statements
- AEA Competencies: https://www.eval.org/page/competencies
The American Evaluation Association is celebrating A Look at Language Week where a group of Minnesota-based evaluators working in justice and equity spaces contribute articles reflecting on the words we use. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Thanks for your comments. I agree that assessing the extent to which an evaluation is feasible is important. My argument regarding the word “evaluability” is twofold. First, I think the term is awkward, and I don’t care to use it when working with stakeholders or discussing with clients. Words exist to convey concepts, and this word does not usefully convey the varied aspects of the concept of evaluability. Second, many evaluability assessments do not take into consideration the role of the evaluator or evaluation team in conducting the work, or the varied contextual and cultural considerations. Not all evaluators are well suited for any and all types of evaluation.
So, I would expect the direction of travel to be broadening the scope of the word ‘evaluability’ to include the evaluator’s role and capabilities, rather than dismissing the word altogether.
I do believe there is value in an evaluability assessment. We should not commit resources to asking questions whose answers are not yet obtainable, or are already known or available through routine programme documents. An evaluability assessment keeps a check on these. If clients are spared resources that are limited to start with, that is of great value.