
CREATE Week: Use of Program Evaluation Standards by Paula Egelson

Hi, I am Paula Egelson, research director at the Southern Regional Education Board in Atlanta. For this week of the AEA 365 blogs, board members from the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE) will be sharing blogs associated with the tenets of the CREATE organization: assessment, teacher and principal effectiveness, program evaluation, and accountability.

CREATE has a long history with the Joint Committee on Standards for Educational Evaluation (JCSEE), which has been housed over the years at Western Michigan University’s Evaluation Center, the University of Iowa, and Appalachian State University. Since its founding in 1974, JCSEE members representing research and practitioner organizations, both nationally and internationally, have created and revised the Program Evaluation Standards, the Personnel Evaluation Standards, and the Classroom Assessment Standards.

Hot Tips:

Our focus today is on the Program Evaluation Standards and some of their uses. The Program Evaluation Standards apply to a wide variety of settings in which learning takes place, ranging from schools and universities to nonprofits and the military. These 30 standards are organized around five key attributes: utility, feasibility, propriety, accuracy, and accountability. Each attribute includes the key concepts related to it, standard statements, implementation suggestions, hazards to avoid, case narratives, and references for further reading.

The Program Evaluation Standards provide guidance and support reflective practice associated with:

  • Whether and when to evaluate,
  • How to select evaluators and other experts,
  • The impact of cultures, contexts and politics,
  • Communication and stakeholder engagement,
  • Technical issues in planning, designing and managing evaluations,
  • Uses and misuses of evaluations,
  • Issues related to evaluation quality, improvement and accountability.

Among other things, the Program Evaluation Standards can help evaluators resolve common evaluation issues such as:

  • Stakeholder over-involvement in the evaluation,
  • Agency disagreement over the evaluation recommendations,
  • A contractor desiring an evaluation report with predetermined outcomes,
  • Stakeholders “sitting on” an evaluation report, and
  • A lack of data collection integrity (lack of timeliness related to data collection, supervisor review of employees’ survey responses, teachers reviewing an online test or survey before the administration begins, not following random sampling guidelines).

I encourage you to take an opportunity to access the Program Evaluation Standards to determine how these standards can be of best use to you and your colleagues. I look forward to hearing about your uses of the standards and obtaining your feedback on them.

Rad Resources:

Detailed information about the Program Evaluation Standards

Information about the work of the Joint Committee on Standards for Educational Evaluation

For more information about CREATE, please go to www.createconference.org. CREATE’s annual research and evaluation conference will take place at William and Mary College in Williamsburg, Virginia, on October 11 and 12, 2018. We hope to see you there!


The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


1 thought on “CREATE Week: Use of Program Evaluation Standards by Paula Egelson”

  1. Hi,

    I am a graduate student at Queen’s University (Canada). At the moment, I am in the midst of elaborating a PED for a social program. In the process, I have decided to utilize a slightly modified version of the six-step framework for program evaluation in public health, which the Centers for Disease Control developed.

    After reading your blog post, I became quite intrigued with the Program Evaluation Standards put forth by the Joint Committee on Standards for Educational Evaluation (JCSEE). Specifically, I noticed that, in addition to utility, feasibility, propriety, and accuracy, another standard is taken into consideration as well, namely accountability. Heretofore, I had envisioned accountability as being an embedded prerequisite of accuracy. However, upon examining the rad resources you kindly shared, I discovered that, indeed, there is a distinction between the two. While accuracy seeks to increase the overall reliability and veracity of evaluation findings, particularly those related to quality, accountability refers to the encouragement of trustworthy documentation practices and metaevaluation. Nevertheless, I was wondering if you could please briefly point out in which ways better accountability might help create an atmosphere that can lead to metaevaluation.

    Changing the focus, I would also like to say that I found the list of common problems associated with evaluation use to be in accordance with what I have encountered in my course readings. Of all the items listed, I was drawn to the following one: “stakeholder over-involvement in the evaluation.” As a newcomer to the field, I would really like to receive guidance on how evaluators can avoid or at least minimize that issue, especially in contexts where program practitioners feel entitled to collaborate intensely.

    Thank you for your time and attention.

