My name is Jane Davidson and I run an evaluation consulting business called Real Evaluation Ltd. In my work, I advise and support organizations on strategic evaluation; provide evaluation capacity building and professional development; develop tools and templates to help organizations conduct, interpret, and use evaluations themselves; and conduct independent and collaborative evaluations and meta-evaluations.
Over several years of working with clients and reviewing (at clients’ request) disappointing evaluation reports, I have noticed several critically important elements that make or break evaluation work but are often missing from evaluators’ methodological toolkits.
Hot tip: Clients find it incredibly frustrating to wade through an evaluation report full of evidence and still be none the wiser at the end about whether the documented outcomes (let alone the entire program, policy, etc.) are any good or not. A key part of an evaluator’s work is to say clearly and explicitly how practically, educationally, socially, or economically significant (not just statistically significant) the outcomes are, both individually and as a set. This is what makes evaluation ‘e-VALU-ation’!
Hot tip: A useful tool for generating real evaluative conclusions is an evaluative rubric. This is a table describing what different levels of performance, value, or effectiveness ‘look like’ in terms of the mix of evidence on each criterion. Grading rubrics have been used for many years in student assessment. Evaluative rubrics make transparent how quality and value are defined and applied. I sometimes refer to rubrics as the antidote to both ‘Rorschach inkblot’ (“You work it out”) and ‘divine judgment’ (“I looked upon it and saw that it was good”)-type evaluations.
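To make the idea concrete, here is a hypothetical, highly simplified rubric for a single criterion, say, ‘participants apply new skills on the job’ (the criterion and level descriptions are invented purely for illustration; see the Rad Resources below for real, worked examples):
- Excellent: Clear, consistent evidence that virtually all participants apply the new skills routinely on the job, with no significant weaknesses.
- Good: Most participants apply the skills in most relevant situations; any weaknesses are minor and not widespread.
- Adequate: Some participants apply the skills some of the time; weaknesses are evident but do not undermine the overall value.
- Poor: Little or no evidence that the skills are applied; serious weaknesses outweigh any strengths.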
Hot tip: Collaborative development of rubrics is a great way to get stakeholders thinking about how ‘quality’ and ‘value’ should be defined for the work they do. It helps build the evaluative thinking needed to generate, understand, accept, and use evaluation findings.
Rad resources:
- Evaluation Methodology Basics: The nuts and bolts of sound evaluation by E. Jane Davidson (2005)
- Improving evaluation questions and answers: Getting actionable answers for real-world decision makers (AEA e-Library’s most viewed and downloaded item)
- Example rubrics in Nunns, Roorda, et al’s (2010) Evaluation of the Recognised Seasonal Employer Policy
- Example rubric (referred to as a ‘global assessment scale’) developed for the evaluation of the Corangamite Salinity Program (case study #10 in Jessica Dart et al’s 1998 Review of Evaluation in Agricultural Extension, pp. 62-63 – a publication from the Rural Industries Research and Development Corporation)
- AEA conference professional development workshop about how to use rubrics (and other evaluation nuts and bolts) to do actionable evaluations (November 10, 2010)
- Strategic evaluation of the workplace assessment program, a relevant recent chapter by Jane in the Handbook of Workplace Assessment (2010)
This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Jane? She’ll be presenting as part of the Evaluation 2010 Conference Program, November 10-13 in San Antonio.