CREATE Week: CAEP Rubric Creation and Revision: A Basic Refresher on Measurement Concepts by Jacqueline Craven

Greetings, evaluators! I am Jacqueline Craven, the doctoral program coordinator at Delta State University responsible for aligning goals and outcomes with Council for the Accreditation of Educator Preparation (CAEP) standards. I write to those of you working with teacher education program personnel on the same challenge. As professors and assessment professionals across the nation continue aligning key program assessments with the still relatively unfamiliar CAEP accreditation standards, a quick refresher on measurement concepts can serve as a timely reminder for those responsible for these assessments.

Many will recall this information but may not immediately see how it relates to rubrics. By understanding the levels at which we measure constructs such as math knowledge or linguistic performance, leaders can better determine whether their rubrics reflect accurate and appropriate types of measurement. Where there are discrepancies, we can revise and streamline our rubrics to measure attributes of student work using the nominal, ordinal, and interval/ratio scales.
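To make these scales concrete, here is a minimal sketch, entirely hypothetical and not drawn from any CAEP material, of how data from a single rubric prompt might map onto them (Python with pandas, for illustration only):

```python
import pandas as pd

# Hypothetical scores from one rubric prompt for five candidates.
scores = pd.DataFrame({
    "candidate": ["A", "B", "C", "D", "E"],    # nominal: labels that only name
    "level": ["emerging", "proficient", "exemplary",
              "proficient", "emerging"],       # ordinal: ordered, unequal gaps
    "points": [70, 85, 95, 88, 72],            # interval/ratio: equal units
})

# Declaring the levels as an ordered categorical makes the ordinal nature
# explicit: comparisons (<, >) are meaningful, but arithmetic is not.
scores["level"] = pd.Categorical(
    scores["level"],
    categories=["emerging", "proficient", "exemplary"],
    ordered=True,
)

print(scores["level"].min(), scores["level"].max())  # emerging exemplary
print(scores["points"].mean())  # a mean is defensible for interval data: 82.0
```

The practical point: averaging ordinal levels as if they were interval scores is a common rubric misstep, and knowing which scale a prompt actually uses tells you which summaries are defensible.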

As an example, let’s examine some common language found in rubrics. Descriptive terms such as “good”, “excellent”, and “superior” typically accompany progressively higher scores, but how exactly do they differ? Inevitably, and even with the best intentions, different raters will perceive and define each term differently and will therefore score the same student work differently. So, what does this mean for improving rubrics?

Hot Tip:

Define in precise terms what exemplifies each level. Rather than use broad descriptive terms for processes or outcomes, translate what constitutes “excellent” into tangible, actionable characteristics, which then compose each rubric’s scale. For example, rather than “excellent communication,” a level descriptor might read “uses discipline-specific vocabulary accurately and cites evidence for every claim.” Doing this for every prompt on a rubric will challenge assignment and assessment authors to specify in detail what is (and what isn’t) ideal. Pauline Dickinson and Jeffery Adams elaborate on rubric creation best practices in their article “Values in Evaluation – The Use of Rubrics,” published in December 2017.

Completing this step thoroughly makes inter-rater reliability much easier to achieve: the added specificity produces a shared understanding and interpretation among everyone who uses the rubric. The explicit definitions and discussion generated by rubric revisions will also help inform students of how they will be assessed, which is another required component of the CAEP standards. In fact, evidence reported by Julie Elizabeth Francis in her 2018 article indicates that students perform better after engaging with and discussing the rubrics used to assess their work.
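Once raters begin scoring with a revised rubric, agreement can be checked directly rather than assumed. Here is a minimal sketch with hypothetical scores, using scikit-learn’s cohen_kappa_score; quadratic weighting credits near-misses on an ordinal scale more than outright disagreements:

```python
from sklearn.metrics import cohen_kappa_score

# Two raters scoring the same ten artifacts on a 1-4 rubric scale
# (hypothetical data for illustration).
rater_1 = [1, 2, 3, 4, 2, 3, 3, 1, 4, 2]
rater_2 = [1, 2, 3, 3, 2, 3, 4, 1, 4, 2]

kappa = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")  # closer to 1.0 means stronger agreement
```

If kappa stays low even after revision, that is a signal the level descriptors still leave room for private interpretation.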

Rad Resources:

To begin, here are some brief but helpful videos on the scales of measurement:

https://www.youtube.com/watch?v=OXTdii-b9Co

https://www.youtube.com/watch?v=A5zlhbmBghI

For quick reference, use the CAEP Evaluation Framework for EPP-created Assessments to evaluate your own rubrics. Items 2-6 pertain directly to the measurement concepts above. And now the fun begins! Which rubric will you revise first?

 

The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
