AEA365 | A Tip-a-Day by and for Evaluators

TAG | Rubrics

Building Evaluation Capacity. David and I have teamed up to apply empowerment evaluation concepts and principles to build evaluation capacity at Google and beyond. We use rubrics to focus learning and student ratings to identify areas that are strong or that merit attention. We also use a 3-step approach to empowerment evaluation and an evaluation planning worksheet (building on my graduate school courses with Nick Smith) to help our colleagues assess their program’s performance.

The worksheet has 4 parts:

  • describe the program to be evaluated
  • define the evaluation context (purpose and audience)
  • plan the evaluation (questions, data sources, procedures)
  • create an evaluation management plan

Teams with little or no evaluation background can dive right into the worksheet to focus on their program’s purpose and goals before setting up metrics. Laying out the evaluation plan is often illuminating, leading to refined program logic, alternative (and more meaningful) programmatic plans, and more useful ideas about how to measure processes and outcomes.
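For teams that want to keep the worksheet in a shared, structured form, here is a minimal sketch of the four parts as a Python data structure; the field names and the example entries are illustrative assumptions, not part of the actual worksheet.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationPlan:
    """Illustrative container mirroring the four-part worksheet."""
    program_description: str = ""                           # 1. describe the program
    purpose: str = ""                                        # 2a. evaluation context: purpose
    audiences: List[str] = field(default_factory=list)      # 2b. evaluation context: audience
    questions: List[str] = field(default_factory=list)      # 3a. evaluation questions
    data_sources: List[str] = field(default_factory=list)   # 3b. data sources
    procedures: List[str] = field(default_factory=list)     # 3c. procedures
    management_plan: str = ""                                # 4. timeline, roles, budget

# Hypothetical example entry
plan = EvaluationPlan(
    program_description="After-school CS outreach club",
    purpose="Formative: improve recruitment and retention",
    audiences=["program staff", "funder"],
    questions=["Are participants gaining confidence in coding?"],
    data_sources=["pre/post survey", "attendance logs"],
    procedures=["survey at weeks 1 and 10", "monthly log review"],
    management_plan="Lead: program manager; report due June",
)
print(plan.questions)
```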

Beyond Google. We are also sharing our work with nonprofits and higher education. Through the Computer Science Outreach Program Evaluation Network (CS OPEN), Google is supporting evaluation for 12 nonprofits in partnership with the National Girls Collaborative Project.

David and I are also co-teaching at Pacifica Graduate Institute. David highlights the 3-step approach to empowerment evaluation, including: 1) mission; 2) taking stock; and 3) planning for the future. I follow up with our worksheet to answer questions such as:

What is the overall program purpose?

Who are the audiences for the evaluation?

How will the results be used, and by whom?

Rubrics and Technology for Peer Review and Self-assessment. Students in our course are developing evaluation proposals that can help them conduct evaluations, solicit funding, and/or guide their doctoral dissertations. The class meets face-to-face, but includes a virtual classroom strategy that has worked well in the past. Students use rubrics to guide their self- and peer-feedback to refine and improve their work and understanding. This improves the proposals, guides instruction, and models our focus on empowerment and capacity building.

Screenshot: a proposal posted online (using Doctopus) with our rubric (in Goobrics) above it, used to rate or evaluate the proposal.

Rad Resources: We are using our evaluation rubric with tools that require Chrome and free extensions:

This is a partial version of the rubrics used in the empowerment evaluation at Pacifica Graduate Institute.

Doctopus: A tool for teachers to manage, organize, and assess student projects in Google Drive.


Goobrics: This rubrics-based assessment tool works with Doctopus, allowing teachers to evaluate students’ work in Google Drive.


Goobrics for Students: Allows students to use a rubric to assess peers’ documents.


Google Forms: Enables students to self-assess their work and their peers’ work using an online survey.
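Because Google Forms responses can be exported to a spreadsheet or CSV, a short script can roll the self- and peer-ratings up into per-criterion averages and flag areas that merit attention. The sketch below is a generic Python illustration; the column names and the 4-point scale are assumptions, not part of our rubric.

```python
import csv
from collections import defaultdict
from statistics import mean

# Assumed export: one row per rating, one column per rubric criterion (1-4 scale).
CRITERIA = ["purpose_clarity", "evaluation_questions", "data_sources", "management_plan"]

def criterion_means(csv_path):
    """Average rating per rubric criterion across all self- and peer-assessments."""
    scores = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for criterion in CRITERIA:
                if row.get(criterion):
                    scores[criterion].append(float(row[criterion]))
    return {c: mean(v) for c, v in scores.items() if v}

# Example: flag criteria whose average falls below 3 on the 4-point scale.
# for criterion, avg in criterion_means("form_responses.csv").items():
#     if avg < 3:
#         print(f"Merits attention: {criterion} (mean {avg:.1f})")
```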


Please contact us for additional information!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello and kia ora, folks! I’m Jane Davidson, and I’m one of the pre-conference workshop facilitators at this year’s AEA conference in Washington DC.

I’ve just wrapped up Day 1 of a 2-day workshop called Actionable Evaluation: Getting succinct answers to the most important questions. Wow, we have a great group in there, and they are brimming with really good questions!

Lessons Learned: Some of the stuff we talked about today:

  • How to rewrite non-evaluative questions so they are explicitly evaluative (i.e., they ask not just “what happened?” but “was it any good?”). This was harder than it looked, I think, but we had some fun with it!
  • My Key Evaluation Question “Cheat Sheet”, which workshop participants can tweak to create an easy set of high-level questions for their own evaluations. This will help them avoid getting lost in the details!
  • How to develop evaluative rubrics, which you can use to interpret quantitative, qualitative, and mixed method evidence. Some fantastic follow-up questions for this.

Hot Tips: From today’s Q & A:

  • Rubrics can be used for independent or participatory evaluation. If you go participatory, shoot for 3 to 10 participants; any more and it takes ages!
  • When constructing rubrics, 4 to 6 levels of performance is usually about right. If you use too many, you are imposing a level of precision that simply doesn’t exist in the underlying construct in the real world. (A sketch of what such a rubric can look like follows this list.)
  • Need to do a lit review but haven’t got the budget? Call 2 or 3 leading gurus on the topic and ask them for a quick live “brain dump” by phone or Skype. Cheap, quick, and credible!
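To make the rubric-levels tip concrete, here is a minimal sketch, in Python, of an evaluative rubric with four performance levels; the criterion name and level descriptors are invented for illustration, not taken from the workshop materials.

```python
# Hypothetical evaluative rubric: one criterion, four performance levels.
RUBRIC = {
    "reach_of_program": [
        (1, "Poor: key intended groups were largely not reached"),
        (2, "Adequate: most intended groups reached, with notable gaps"),
        (3, "Good: intended groups reached; only minor gaps"),
        (4, "Excellent: intended groups reached, including the hardest to reach"),
    ],
}

def describe(criterion: str, level: int) -> str:
    """Return the descriptor attached to a performance level for a criterion."""
    for lvl, descriptor in RUBRIC[criterion]:
        if lvl == level:
            return descriptor
    raise ValueError(f"No level {level} defined for {criterion}")

# A rating of 3 reads back as its descriptor, which is what lets you interpret
# quantitative, qualitative, or mixed evidence evaluatively.
print(describe("reach_of_program", 3))
```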

Rad Resources: Related conference sessions:

Rad Resource: Key resource for the workshop:

Actionable Evaluation Basics, Jane’s e-minibook; also available in Spanish as Principios Básicos de la Evaluación para la Acción (translated by the awesome Pablo Rodriguez-Bilella), and coming soon in French (translated by Ghislain Arbour)!


 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


My name is Kim Sabo Flores, and I am the associate director of Thrive Foundation for Youth. Over the past ten years, the Foundation has partnered with leading youth development researchers, including Peter Benson, Bill Damon, Carol Dweck, Richard Lerner, and David Yeager, to create a body of knowledge and an evidence-based toolkit (Step-It-Up-2-Thrive) that promotes thriving in youth. At its core, our approach is youth-driven and nurtures the ongoing learning and reflection of both youth and adult staff members. This is accomplished by using a set of rubrics, developed in concert with Dr. Richard Lerner, that help youth and adults stop and reflect on a young person’s goal-directed skills and the “Six Cs” of positive youth development. These rubrics are incorporated at given points in Step-It-Up-2-Thrive lessons.

Hot Tip: Rubrics are terrific tools for three reasons:

#1 They Promote Intentional Dialogue and Reflection Between Youth and Adults: Rubrics are a terrific way to engage youth and adults in explicit dialogue and reflection about their work together. As adults and youth use rubrics to ignite their conversations, they become even more intentional in their work, develop a common language, and create targeted strategies that are deeply meaningful to youth.

#2 They Are Robust Evaluation Measures: Rubrics are terrific evaluation tools because they capture youth and adult perspectives, at multiple points in time, about a given youth’s development.

#3 Rubrics Support Staff Development: The explicit language of the rubrics allows staff to observe and talk about youth within a similar framework. Rubrics become a terrific staff development strategy when staff debate vignettes of youth in a process that calibrates them to think similarly about evidence of youth behavior patterns, allowing them to identify strengths and challenges that can be used to drive programming.
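One way to check whether staff really are calibrated after debating vignettes is to have everyone rate the same vignettes and compute an agreement statistic. The sketch below is a generic Python illustration (it is not part of the Step-It-Up-2-Thrive toolkit), using made-up ratings from two staff members on a 4-level rubric:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    if expected == 1:  # both raters used a single identical category
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical rubric levels (1-4) assigned to six vignettes by two staff members.
rater_1 = [3, 2, 4, 3, 1, 2]
rater_2 = [3, 2, 3, 3, 1, 2]
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(f"Percent agreement: {agreement:.0%}, kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```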

Rad Resource: www.stepitup2thrive.org

All of the rubrics are being scientifically validated by Tufts University, and they can be previewed on the Step-It-Up-2-Thrive website. Pilot data show that the rubrics are accurate measures of the six Cs of positive youth development and of intentional self-regulation, otherwise referred to as goal-directed behavior. The Tufts team is currently conducting phase two of its study to explore how well the measures capture growth over time. This study should be completed by early 2012. All revised measures will then be released to the public domain and made available on the website at no cost.

By summer 2012, Step-It-Up-2-Thrive will be offering rubric certification that will give you all the tools and skills necessary to calibrate staff members. To learn more, join us during our AEA session on Saturday, November 5th, entitled: From Positive Youth Development to Full Potential: Rubrics That Shift Practice and Evaluation. Contact me to join our Thrive mailing list at: kim@thrivefoundation.org

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Kim and colleagues? They’ll be presenting as part of the Evaluation 2011 Conference Program, November 2-5 in Anaheim, California.


I am Dr. Jill Ostrow, an Assistant Professor of Teaching in the Department of Learning, Teaching, and Curriculum at the University of Missouri. I coordinate and teach a yearlong online capstone graduate course titled Classroom Research. The first half of the course is devoted to learning about Classroom Research: developing the question, collecting data, and beginning to write the literature review. The second half of the course is mainly devoted to writing the paper. The students write the paper in sections and receive many comments on each draft they submit. Their final paper is assessed on a rubric that was developed long before I arrived at the university and, like all rubrics, has been modified, updated, and tweaked in the years since its creation. I have found the following useful when using such a rubric with my graduate students:

Hot Tip: Copy the highest-scoring level of the rubric (if you use points) word-for-word into the instructions for each section of the paper. That way, students know what to expect right at the start of the writing process.

Hot Tip: After the student has written the final draft of each section of the paper, send along just that section of the rubric. I cut and paste the individual sections right into a Word doc. Ask the student to do a self-assessment using that section of the rubric. Once you receive the student’s self-assessment, compare your own assessment against it. Often, I find this is where confusion and misconceptions between student and teacher come to light.
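A few lines of code can make that comparison routine once the scores are in hand. The sketch below is a generic Python illustration with made-up section names and a made-up 4-point scale; it simply flags the rubric sections where the student’s self-assessment and the instructor’s rating diverge, which are usually the ones worth a conversation:

```python
# Hypothetical rubric-section scores on a 4-point scale.
self_assessment = {"question": 4, "data_collection": 3, "literature_review": 4, "findings": 3}
instructor      = {"question": 4, "data_collection": 2, "literature_review": 3, "findings": 3}

def discrepancies(student_scores, instructor_scores, threshold=1):
    """Return sections where the two ratings differ by at least `threshold` points."""
    return {
        section: (student_scores[section], instructor_scores[section])
        for section in student_scores
        if abs(student_scores[section] - instructor_scores[section]) >= threshold
    }

for section, (student, teacher) in discrepancies(self_assessment, instructor).items():
    print(f"Discuss {section}: student rated {student}, instructor rated {teacher}")
```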

Hot Tip: With rubrics, students often fall into the middle two categories. I often highlight words or phrases from one box in a scoring category and words or phrases from another. If you rely on points, this can be difficult to score, but again, this is where negotiation between student and teacher is important.

Hot Tip: On the final assessment, it is important to write comments and not just fill out the rubric. But it is also useful to note some of the comments the student wrote on the self-assessments if you found them to be thoughtful and constructive.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.



Elise Laorenza on Rubric Development

Hello fellow evaluation colleagues. My name is Elise Laorenza. I work for The Education Alliance at Brown University as a research and evaluation specialist. Having worked in evaluation settings for eight years, I’ve often used rubrics that others have developed to examine qualitative data (e.g., classroom observation, student work, teacher professional development). In responding to a proposal to evaluate a summer learning program, we saw the need for an implementation rubric that aligned closely with program activities and goals. What looked like a simple process in the proposal turned out to be exciting, but not simple: with the goal of getting beyond a checklist, we dreamed of an instrument that would not only yield a reliable description and measure of implementation, but also serve as a tool for program planning and decision-making. Reflecting on the process, we share below what we were thankful we did and what we wish we had done differently.

Lessons Learned:

  • We didn’t underestimate the usefulness of grounding rubric categories and features in published research. Naturally, we turned to the literature on effective summer learning programs to use these features in our rubrics; however, we relied heavily on a series of quasi-experimental studies with which the program staff were familiar. This was essential to getting buy-in for the use of our rubrics.
  • We were reluctant to get “outside the box” in labeling our rubric anchors. Most rubrics have traditional anchors that consist of either numbers or descriptors (exemplar, operational, satisfactory, etc.). We chose somewhat traditional anchors (0: not present to 3: fully operational) given that the purpose was to assess implementation; however, several stakeholders questioned what these terms meant (we did at times too). Getting outside the traditional anchor realm might have provided a more accessible interpretation of implementation scores.
  • We incorporated multiple opportunities for description, and thereby had several strategies to establish reliability. The literature provided not only key features, but also descriptions of best practices in implementing those features. We used both. Additionally, we offered descriptions of observation evidence within the rubric to justify scoring. These processes resulted in strong correlations among rubric features and high levels of consistency across program implementation scores.

Hot Tip – Be prepared to defend: Before the data were collected and after reliability was established, we were asked to defend the rubric. While feeling like a defense attorney is often the norm for external evaluators, defending the rubric was not as simple as defending a latent variable with a Cronbach’s alpha. Being transparent about our process helped, but was not always enough.
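For evaluators who want to show their work when asked to defend a rubric, here is a minimal sketch of the kind of evidence described above: pairwise correlations among rubric features and a Cronbach’s alpha treating the features as one implementation scale. The scores are made up, and this is a generic Python illustration rather than our actual analysis.

```python
import numpy as np

# Hypothetical implementation scores: rows = sites observed, columns = rubric features (0-3).
scores = np.array([
    [3, 2, 3, 2],
    [2, 2, 2, 1],
    [3, 3, 3, 3],
    [1, 1, 2, 1],
    [2, 3, 2, 2],
])

# Pairwise correlations among rubric features.
feature_correlations = np.corrcoef(scores, rowvar=False)
print(np.round(feature_correlations, 2))

# Cronbach's alpha treating the features as items on one implementation scale.
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```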

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

