My name is Lori Wingate. I am a Principal Research Associate at The Evaluation Center at Western Michigan University. Two closely related topics I return to frequently in my research, practice, and teaching are metaevaluation and the Program Evaluation Standards (Joint Committee, 1994). Here I share some lessons learned from my recent dissertation research on the use of the Program Evaluation Standards as a rating tool for metaevaluation.
The Program Evaluation Standards are a set of 30 standards organized into four domains: utility, feasibility, propriety, and accuracy. Correspondingly, they are grounded in the principles that evaluations should be useful, practical, ethical, and valid.
Because of their applicability to a broad array of evaluation contexts and their widespread acceptance, they are often used as criteria in metaevaluation. Although the Standards provide a useful metaevaluation framework, there are some significant challenges to their application when a metaevaluation is focused on evaluation reports, without the opportunity to gather additional information about the evaluation's conduct.
This claim is based on my personal experience in using the Standards to evaluate reports, and it is strongly supported by the findings from my study of interrater agreement in metaevaluation. Although agreement was generally low across all the standards, the uncalibrated raters had the least agreement on standards in the feasibility and propriety domains, which are largely concerned with the manner in which an evaluation is carried out. With only reports in hand to judge the evaluation, raters had to infer quite a bit in order to make judgments about the evaluation process.
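If you want to quantify interrater agreement in your own metaevaluation, a chance-corrected statistic such as Cohen's kappa is a common choice for two raters. The sketch below is illustrative only (the agreement statistic used in my study is not specified here, and the ratings shown are hypothetical): it compares two raters' met/not-met judgments on a handful of reports for a single standard.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired ratings"
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: computed from each rater's marginal rating frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical ratings of five reports on one standard (1 = met, 0 = not met)
rater_1 = [1, 0, 1, 1, 0]
rater_2 = [1, 0, 0, 1, 0]
print(round(cohens_kappa(rater_1, rater_2), 2))  # prints 0.62
```

Values near 1 indicate strong agreement beyond chance; values near 0 suggest the raters agree no more often than random guessing would produce, which is the pattern one would expect when raters must infer process details from reports alone.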
If you're thinking of conducting a metaevaluation in which you will use the Program Evaluation Standards as criteria and you have only evaluation reports as data, here are some tips and resources that may help make it a more valid and useful endeavor:
Hot Tip: Select only those standards on which judgments can be made based on information that is typically included in evaluation reports.
Rad Resources: Check out the Program Evaluation Standards at www.jcsee.org. Watch for a new edition to be published this year. A review of Dan Stufflebeam’s Program Evaluation Metaevaluation Checklist will help you get started in determining which standards will be feasible for use in your metaevaluation.
Hot Tip: If you want to look at several reports produced by a single organization or in a single content area, spend some time developing tailored criteria for that context.
Rad Resource: ALNAP's Quality Proforma is an instrument designed for assessing humanitarian action evaluation reports. The criteria are tailored to the domain in which the evaluations were conducted and are focused on report quality.