Elise Laorenza on Rubric Development

Hello fellow evaluation colleagues. My name is Elise Laorenza, and I work for The Education Alliance at Brown University as a research and evaluation specialist. In eight years of evaluation work, I’ve often used rubrics that others have developed to examine qualitative data (e.g., classroom observations, student work, teacher professional development). In responding to a proposal to evaluate a summer learning program, we needed an implementation rubric that aligned closely with the program’s activities and goals. Although it looked like a simple process in the proposal, developing our own implementation rubric turned out to be exciting, but not simple. With the goal of getting beyond a checklist, we envisioned an instrument that would not only yield a reliable description and measure of implementation, but also serve as a tool for program planning and decision-making. Reflecting on the experience, we share below what we were glad we did and what we wish we had done differently.

Lessons Learned:

  • We didn’t underestimate the usefulness of grounding rubric categories and features in published research. Naturally, we turned to the literature on effective summer learning programs to identify features for our rubric; in particular, we relied heavily on a series of quasi-experimental studies with which the program staff were familiar. This was essential to getting buy-in for the use of our rubric.
  • We were reluctant to get “outside the box” in labeling our rubric anchors. Most rubrics have traditional anchors that consist of either numbers or descriptors (exemplar, operational, satisfactory, etc.). We chose somewhat traditional anchors (0: not present to 3: fully operational) given that the purpose was to assess implementation; however, several stakeholders questioned what these terms meant (at times, so did we). Getting outside the traditional anchor realm might have provided a more accessible interpretation of implementation scores.
  • We incorporated multiple opportunities for description, and thereby had several strategies for establishing reliability. The literature provided not only key features, but also descriptions of best practices in implementing those features; we used both. Additionally, we recorded descriptions of observation evidence within the rubric to justify scoring. These processes resulted in strong correlations among rubric features and high levels of consistency across program implementation scores (a simple agreement check is sketched after this list).
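To make the consistency check concrete, here is a minimal sketch of the kind of inter-rater agreement calculation an evaluation team might run on rubric scores. The feature names and scores below are hypothetical, not from our study; the 0–3 values simply mirror the anchors described above.

```python
# Illustrative sketch only: hypothetical scores from two raters applying the
# 0-3 implementation rubric to the same set of program features. The feature
# names and values are invented for demonstration, not actual evaluation data.
rater_a = {"academics": 3, "enrichment": 2, "attendance": 3, "family_engagement": 1}
rater_b = {"academics": 3, "enrichment": 2, "attendance": 2, "family_engagement": 1}

features = sorted(rater_a)
exact_matches = sum(rater_a[f] == rater_b[f] for f in features)
within_one = sum(abs(rater_a[f] - rater_b[f]) <= 1 for f in features)

print(f"Exact agreement:    {exact_matches / len(features):.0%}")
print(f"Adjacent agreement: {within_one / len(features):.0%}")
```

In practice a team might supplement a simple percent-agreement check like this with a chance-corrected statistic, but the basic idea is the same: score independently, compare feature by feature, and reconcile disagreements against the rubric's descriptions.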

Hot Tip – Be prepared to defend: Before the data were collected and after reliability was established, we were asked to defend the rubric. While feeling like a defense attorney is often the norm for external evaluators, defending the rubric was not as simple as defending a latent variable with a Cronbach’s alpha. Being transparent about our process helped, but it was not always enough.
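For readers who want to see what the Cronbach’s alpha comparison refers to, below is a brief illustration of how alpha could be computed from a matrix of rubric scores. The score matrix is invented for demonstration; an actual defense of a rubric rests on the real implementation data and on transparency about how the instrument was developed.

```python
import numpy as np

# Hypothetical matrix of implementation scores: rows are program sites,
# columns are rubric features, each scored 0-3. Values are invented.
scores = np.array([
    [3, 2, 3, 2],
    [2, 2, 3, 1],
    [3, 3, 3, 3],
    [1, 1, 2, 0],
    [2, 3, 2, 2],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (observations x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each rubric feature
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```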

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

1 thought on “Elise Laorenza on Rubric Development”

  1. I think this is a great idea, Elise. I think using a participatory method with the program staff would help make the rubric more transparent and increase buy-in to the evaluation, as it would show the degrees of success possible and perhaps put some staff more at ease.

    I developed a rubric for assessing the quality of logic models, which my workshop participants have said they really appreciate when learning to develop them.
