Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
G’day, I’m Gerard Atkinson, a Director at ARTD Consultants, an Australian evaluation consulting firm. Most of my team work and live on the unceded lands of the peoples of the Eora Nation (Sydney), but I live and work on the unceded lands of the Eastern Maar peoples in Warrnambool. I acknowledge their unbroken connection to country and culture and pay my respects to elders past and present. I’m here to talk about rubrics and how you can use them to create clear and accountable evaluations.
Rubrics are a tool that has been in use for thousands of years and has arisen independently in cultures all over the world. At their core, rubrics are a way of systematically answering two questions:
- What are the qualities of a thing that are important for us to measure? (the domain/dimension)
- How do we determine whether one thing is better than another for a particular quality that we are measuring? (the scale)
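To make these two questions concrete, here is a minimal sketch of a rubric as a data structure (the domain names and scale labels are invented for illustration, not taken from any real evaluation):

```python
# A rubric pairs domains (what we measure) with an ordinal scale
# (how we judge one thing as better than another on that quality).
rubric = {
    "domains": ["reach", "engagement", "outcomes"],
    "scale": ["poor", "adequate", "good", "excellent"],  # worst -> best
}

def scale_value(rubric, label):
    """Convert a scale label to its ordinal position (0 = worst)."""
    return rubric["scale"].index(label)
```

The scale is ordinal, not numeric: "excellent" is better than "good", but not necessarily twice as good, which is worth keeping in mind before averaging ratings.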
You have almost certainly dealt with a rubric before – think of every time you submitted an essay in school or college, or addressed a set of criteria for an application. Chances are, the person making the assessment was using some form of rubric to score it.
Rubrics are a fundamentally simple but versatile tool. Because of this, they are immensely powerful. They allow us to articulate and systematise our way of thinking, perceiving and knowing, and then communicate this with others.
Here’s a quick summary of rubrics in evaluation and why you should consider them as a tool to use as part of your work.
Why are rubrics useful?
- They can work on both quantitative and qualitative data sources (or combinations of both) – they are effectively data-independent!
- A good rubric will show a clear and articulated rationale for decision making, improving accountability.
- Rubrics can be communicated easily using a “scorecard” that sets out the findings in an easy-to-understand way, improving accessibility for stakeholders.
How do we make and use a rubric?
- Co-design, co-design and co-design! The best rubrics are the ones that recognise and integrate the views of stakeholders on what is important and how to differentiate it. It can be a messy process, but the end result is worth it.
- Refinement and testing are crucial to ensuring a shared understanding of the meaning of domains and scales.
- Apply the rubric using the sources of data as inputs, and as a group discuss your thinking and findings.
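As a sketch of that last step (the evaluands, domains, and ratings below are hypothetical, and a real application would be a group judgement, not a calculation), applying a rubric can be as simple as recording an agreed rating per domain for each evaluand and summarising the results as a scorecard:

```python
# Hypothetical ratings: each evaluand (e.g. a grant in a larger program)
# receives an agreed rating per domain on a 1-4 scale (higher = stronger).
ratings = {
    "Grant A": {"reach": 3, "engagement": 2, "outcomes": 4},
    "Grant B": {"reach": 4, "engagement": 4, "outcomes": 2},
}

def scorecard(ratings):
    """Summarise each evaluand: per-domain ratings plus an unweighted mean."""
    rows = {}
    for evaluand, scores in ratings.items():
        rows[evaluand] = {**scores, "mean": sum(scores.values()) / len(scores)}
    return rows
```

The table this produces is the starting point for the group discussion, not its conclusion – the ratings themselves come out of that shared deliberation.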
Where can we use rubrics?
- Rubrics are great for “apples vs oranges” situations, where you have vastly different evaluands that need to be evaluated against a common framework (e.g. individual grants awarded as part of a larger program).
- Mixed-method evaluations, where the process of synthesis can be formalised and audited using the rubric.
- Multi-program or policy evaluation, where we want to look at programs individually but also understand the collective outcomes and gaps.
When might we not use rubrics (at least not on their own)?
- When we’re evaluating things that can’t be rated and/or compared (e.g. exploratory evaluation, or looking at unexpected outcomes).
- When our findings need to take the form of statistical inferences.
- Systems evaluation, where the interconnections between elements are themselves part of, and affect, what is being measured.
What is the future of rubrics in evaluation?
- We’ve developed a triple-rubric approach (a.k.a. the “Rubrics Cube”), where we enrich our primary evaluation rubric with rubrics that consider the quality of individual data sources and the strength of our findings for evaluands and domains. This makes for a more robust, nuanced and accountable evaluation.
- Interactive rubrics that use software to visualise findings can present the scorecard with data on evidence, improving communication; we can even change the weighting of rubric domains to reflect stakeholder preferences and test sensitivity of findings.
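One way to sketch the weighting idea (the function, weights, and ratings here are assumptions for illustration): give each domain a weight, compute a weighted score, and rerun with different weights to see whether the findings are sensitive to stakeholder preferences.

```python
def weighted_score(scores, weights):
    """Weighted mean of domain ratings; rerunning with different
    weights is a simple sensitivity test on the findings."""
    total = sum(weights.values())
    return sum(scores[d] * w for d, w in weights.items()) / total

# Hypothetical ratings for one evaluand on a 1-4 scale.
scores = {"reach": 2, "engagement": 2, "outcomes": 4}

equal_weights = weighted_score(scores, {"reach": 1, "engagement": 1, "outcomes": 1})
outcome_heavy = weighted_score(scores, {"reach": 1, "engagement": 1, "outcomes": 2})
```

If the ranking of evaluands holds up across plausible weightings, the findings are robust to that source of subjectivity; if it flips, that is itself a useful finding to surface with stakeholders.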
- In my spare time I’ve been testing how Large Language Models (LLMs) such as GPT might support rubric design and delivery, and how they compare with human raters when assessing evidence.
E. Jane Davidson’s book “Evaluation Methodology Basics” provides a clear and concise introduction to rubric-based evaluation.
One of the vital papers on evaluation using rubrics is King, McKegg, Oakden, and Wehipeihana’s 2013 paper, which showcases the variety of ways in which rubrics can be used, especially in ensuring culturally appropriate evaluation.
Julian King does some impressive work in applying rubrics in economic evaluation, and this essay addresses the concept of subjectivity in rubrics.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.