
Julia Williams on Developing the Half-naked Rubric to Define Learning Expectations

Greetings, colleagues, from the yet-chilly north shore of Lake Superior. My name is Julia Williams, and I teach courses on assessment and evaluation in the Department of Education at the University of Minnesota Duluth. I have also had the great luck to work as an evaluator with various initiatives.

Lessons Learned: Valid measurement of the effectiveness of trainings can, indeed, be elusive, especially in projects where learning outcomes have not been articulated beyond attendance and participant satisfaction. I would like to share a process that I have found both effective and educational for collaboratively defining learning expectations for target participants. I call it “Half-Naked Rubric Building.” It begins with assembling representatives of all stakeholders, either in person or electronically via Google Docs or wikis.

Step One: The constituents define, by consensus, the major skills, knowledge, or understandings that should result from the proposed training, relative to the goals of the initiative. These elements can be part of a holistic description or trait-analytic, as appropriate to the project.

Step Two: The group defines, by consensus, what it would look like if the participants in the trainings really “got it” in regard to the named skills, understandings, and knowledge elements.

Step Three: The group defines, by consensus, what it would look like if the participants’ skills, understandings, etc. were minimally acceptable for each goal.

What the group will build is a “Half-naked Rubric” that can be displayed in table form. The rubric will not go into detail regarding subtle distinctions between scoring levels 1-2, 2-3, or 3-4, as these distinctions may overwhelm the process and add negligible value to the evaluation. Its creation can, however, give stakeholders a platform for building shared expectations by consensus, through authentic negotiation and essential conversations.

| Expectations | Minimum level of quality (1) | 2 | High level of quality (3) | 4 |
|---|---|---|---|---|
| Identified skills, knowledge, and understandings | Scoring criteria for acceptable work or performance | | Scoring criteria for very good work | |
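If you track the rubric electronically alongside your Google Docs or wiki draft, the structure is also easy to represent as data: each expectation carries agreed criteria only at levels 1 and 3, and levels 2 and 4 are deliberately left empty. Here is a minimal sketch in Python; the expectation and criteria text are hypothetical placeholders, not drawn from any particular project:

```python
# Minimal sketch of a half-naked rubric: only levels 1 and 3 carry
# scoring criteria; levels 2 and 4 are intentionally left undefined.

half_naked_rubric = {
    "Facilitates data-based decision making": {  # hypothetical expectation
        1: "Identifies relevant data sources for a stated question.",   # minimally acceptable
        3: "Selects, interprets, and acts on data aligned to program goals.",  # "really got it"
    },
}

def criteria_for(expectation: str, level: int) -> str:
    """Return the agreed criteria for a level, or note that it is undefined by design."""
    return half_naked_rubric[expectation].get(
        level, "Intentionally undefined (half-naked by design)."
    )

print(criteria_for("Facilitates data-based decision making", 2))
```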

Hot Tip #1: If you should ever choose to collaboratively build a half-naked rubric, you may wish to first make certain that you accept the following:

Premise #1 – Rubrics define quality of product or performance. Scoring criteria are empirical, not relative or comparative.

Premise #2 – Rubrics clearly define achievement targets, and are used to plan instruction and to guide the learner.

Hot Tip #2: In face-to-face sessions, a carousel methodology works well. In electronic venues, establishing ground rules for editing is essential to maintaining collegiality.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

2 thoughts on “Julia Williams on Developing the Half-naked Rubric to Define Learning Expectations”

  1. Hi Julia,
    I like your suggestions for developing a rubric. But could you please explain “carousel methodology”? I don’t think I’ve heard of that before.

  2. Hi Kathleen, and thanks for commenting. A carousel is an instructional method that can be used to facilitate group creation of belief statements, plans, curriculum, and many other collaborative tasks. The key component of a carousel is that the large group is broken into smaller groups that move in rotation from one station to another, eventually contributing to each component of the task. A form of carousel that has worked for me in creating rubrics begins with the logic model or theory of change components that relate to the identified training. Each of these components is a station in the carousel. The large group is then divided among the stations with the task of identifying domains. Then, the smaller groups rotate, in parts or wholes, to the next station to comment on or revise the work of the previous group and begin to draft the scoring criteria, and so on. (For a concrete picture of the rotation, see the sketch following these comments.)
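To make the carousel rotation concrete, here is a minimal round-robin schedule sketched in Python; the station names (logic-model components) and group labels are hypothetical, and the exact rotation order can of course vary:

```python
# Minimal sketch of a carousel rotation schedule (illustrative only).
# Assumes one station per logic-model component and an equal number of groups.

stations = ["Inputs", "Activities", "Outputs", "Outcomes"]  # hypothetical components
groups = ["Group A", "Group B", "Group C", "Group D"]

# In round r, group g works at station (g + r) mod N, so every group
# eventually comments on and revises every component.
for r in range(len(stations)):
    print(f"Round {r + 1}:")
    for g, group in enumerate(groups):
        station = stations[(g + r) % len(stations)]
        print(f"  {group} -> {station}")
```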
