AEA365 | A Tip-a-Day by and for Evaluators

TAG | metaevaluation

Greetings! I'm Sara Vaca, independent consultant at EvalQuality.com and Creative Advisor of this blog. In an earlier post (link) I looked at where and how evaluation could use a dash of creativity; now I'd like to share my experience using creativity to better understand evaluation.

After my first AEA conference in Washington, D.C. (October 2013), all the words and concepts I had been hearing that week (mixed methods, rubrics, approaches, participation, values, dashboards, and so on) were flying around in my head during my daily stroll. I kept wondering: there are so many different possibilities (stance, paradigm, approach, methods) when designing an evaluation, and yet they are not clearly visible in evaluation reports…

Suddenly it all clicked in my mind and I thought: what if you could see many of these evaluator's decisions on just one page? I know! I will create a "meta-evaluation" dashboard!

Some months later, after much reading and research and many sketches and drafts, I came up with this dashboard, which shows at a glance ten issues that are, for me, major dimensions of an evaluation:

  1. Complexity
  2. Purpose ranking
  3. Evaluative synthesis thermometer
  4. Participation scan
  5. Sampling decisions
  6. Mixed-methods scan
  7. Core tools
  8. Credible evidence
  9. Evaluation standards
  10. Evaluation outputs


The dashboard was initially conceived to visualize the methodology of an evaluation report after its completion, and evaluators can use it to explain the methodology they have followed. It is also a tool for meta-evaluation and quality assurance. Beyond that, it can be used to visualize an evaluation design before it is carried out, to discuss design options with evaluation commissioners, to show the design proposed by the commissioner in the Terms of Reference, or even to teach evaluation.
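Purely to illustrate the "one page" idea (this is not a reproduction of the actual dashboard, which uses bespoke visual elements such as thermometers and scans), here is a minimal Python sketch that places invented summary ratings for the ten dimensions on a single figure:

```python
# A toy sketch of the "one page" idea only; not the real dashboard,
# which uses bespoke visuals (thermometers, scans, rankings).
# All scores below are invented for illustration.
import matplotlib.pyplot as plt

dimensions = [
    "Complexity", "Purpose ranking", "Evaluative synthesis",
    "Participation", "Sampling decisions", "Mixed-methods",
    "Core tools", "Credible evidence", "Evaluation standards",
    "Evaluation outputs",
]
scores = [3, 4, 2, 5, 3, 4, 3, 4, 5, 3]  # hypothetical 1-5 ratings

fig, ax = plt.subplots(figsize=(8, 5))
ax.barh(dimensions, scores, color="steelblue")
ax.invert_yaxis()  # keep the list order top to bottom
ax.set_xlim(0, 5)
ax.set_xlabel("Illustrative rating (1 = minimal, 5 = extensive)")
ax.set_title("One-page view of an evaluation's methodological choices")
fig.tight_layout()
plt.show()
```

In the real dashboard each dimension gets its own tailored visual, but even this toy version shows how much methodological information can sit on one page.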

I presented it as a poster at both the European Evaluation Society conference (Dublin) and Evaluation 2014 (Denver), and I would like to thank everyone for the comments and feedback I received, from the people who didn't understand it at first to those who told me it was inspiring. Special thanks to Michael Scriven, Jennifer Greene, Patricia Rogers, Jane Davidson, Beverly Parsons, Ian Davies, and the many others who took the time to look at it and comment on it.

For more information: http://www.evalquality.com/the-meta-evaluative-dashboard/

For reactions and comments: Sara.vaca@EvalQuality.com

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Our names are Eun Kyeng Baek and SeriaShia Chatters, and we are an evaluation team and doctoral students at the University of South Florida. The dynamics of a metaevaluation team can determine the overall success of a metaevaluation. Metaevaluation checklists, such as Stufflebeam's Program Evaluations Metaevaluation Checklist, help guide the work; however, the dynamics of the team must also be considered when leading a metaevaluation. Here we share a few helpful hints to improve the dynamics of a metaevaluation team and ensure a smooth, successful metaevaluation.

Lessons Learned:

Communication: During metaevaluation meetings, observe and listen more than you talk

It is important to understand the 'how' and 'why' behind your team members' communication styles. Even if you are familiar with each member's communication style outside of the team, understand that the dynamics of the team can alter individuals' communication styles and undermine the success of the metaevaluation. Since, by some estimates, nonverbal communication carries as much as 75% of a person's message, observing your team members during a meeting can provide cues to possible problems in the inner workings of your team. Encourage candid, open communication balanced with professionalism and respect for each team member.

Diversity: Embrace the diverse backgrounds of your team and utilize their areas of expertise

Each team member brings their own culture, expertise, and knowledge to the table. Embrace these differences and use them to strengthen the team and the outcome of the metaevaluation. Empathetic listening is an important technique here: it means listening to understand your team members' worldviews and allowing yourself to see the metaevaluation from their point of view.

Leadership: Recognize team member strengths and limitations

Team leaders should recognize team members' strengths and limitations and ensure that each member is assigned tasks that draw on their strengths. Understanding team member roles is a valuable addition to any metaevaluation team leader's toolbox. Its advantages include increased team effectiveness, increased team cohesion, a better understanding of the underlying dynamics of working in a team, and potential productivity gains.

Conflict: Employ effective, ethical methods to defuse conflict

Team leaders should employ effective and ethical methods to defuse conflict when they recognize difficult team members. Useful conflict-resolution techniques to keep in your toolbox include persuasion, smoothing, and conciliation. Persuasion involves providing the other side with factual evidence of a position's correctness and pointing out how the proposition will benefit them. Smoothing and conciliation involve emphasizing the similarities between the two parties, pointing out common philosophies, and avoiding negative interactions. The key here is to reduce tension and increase trust between the two parties.

Here’s to a smooth, successful metaevaluation!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Lori Wingate. I am a Principal Research Associate at The Evaluation Center at Western Michigan University. Two closely related topics I return to frequently in my research, practice, and teaching are metaevaluation and the Program Evaluation Standards (Joint Committee, 1994). Here I share some lessons learned from my recent dissertation research on the use of the Program Evaluation Standards as a rating tool for metaevaluation.

The Program Evaluation Standards are a set of 30 standards organized in four domains: utility, feasibility, propriety, and accuracy. Correspondingly, they are grounded in the principles that evaluations should be useful, practical, ethical, and valid.

Because of their applicability to a broad array of evaluation contexts and their widespread acceptance, the Standards are often used as criteria in metaevaluation. Although they provide a useful metaevaluation framework, there are significant challenges to applying them when a metaevaluation is focused on evaluation reports alone, with no opportunity to gather additional information about how the evaluation was conducted.

This claim is based on my personal experience using the Standards to evaluate reports, and it is strongly supported by the findings of my study of interrater agreement in metaevaluation. Although agreement was generally low across all the standards, the uncalibrated raters agreed least on standards in the feasibility and propriety domains, which are largely concerned with how an evaluation is carried out. With only reports in hand to judge the evaluation, raters had to infer quite a bit to make judgments about the evaluation process.
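For readers unfamiliar with the measure, interrater agreement in a rating task like this is often quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below is a minimal, self-contained illustration with invented ratings; it is not data or code from the dissertation study.

```python
# Minimal illustration of interrater agreement (Cohen's kappa).
# The ratings below are invented; this is not data from the study.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical ratings."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters assigned categories independently.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical raters scoring ten reports on one standard (0-2 scale).
rater_1 = [2, 1, 0, 2, 1, 1, 2, 0, 1, 2]
rater_2 = [2, 0, 0, 1, 1, 2, 2, 1, 1, 0]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")  # kappa = 0.24
```

Values near zero mean agreement little better than chance; results in that range are exactly what motivates rater calibration.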

If you’re thinking of conducting a metaevaluation in which you will use the Program Evaluation Standards as criteria and you have only evaluation reports for data, here are some tips and resources that may help make it a more valid and useful endeavor:

Hot Tip: Select only those standards on which judgments can be made based on information that is typically included in evaluation reports.

Rad Resources: Check out the Program Evaluation Standards at www.jcsee.org. Watch for a new edition to be published this year. A review of Dan Stufflebeam's Program Evaluations Metaevaluation Checklist will help you get started in determining which standards will be feasible to use in your metaevaluation.
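To make the Hot Tip above concrete, here is a hedged sketch of one way to encode which standards can be judged from a report alone and to filter a rating form down to them. The four domain labels come from the post; the standard names, flags, and rating scale are illustrative placeholders, not the Joint Committee's wording.

```python
# Illustrative only: the standard names and report-assessable flags are
# placeholders, not the Joint Committee's actual wording or guidance.
STANDARDS = [
    # (domain, standard, can it be judged from the report alone?)
    ("Accuracy",    "Justified conclusions", True),
    ("Accuracy",    "Report clarity",        True),
    ("Utility",     "Report timeliness",     False),
    ("Feasibility", "Practical procedures",  False),
    ("Propriety",   "Formal agreements",     False),
]

# Keep only the standards a reviewer can rate with just the report in hand.
rating_form = [(domain, standard)
               for domain, standard, from_report in STANDARDS
               if from_report]

for domain, standard in rating_form:
    print(f"[{domain}] {standard}: rate 1 (poor) to 4 (excellent)")
```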

Hot Tip: If you want to look at several reports produced for or by a single organization, or in a single content area, spend some time developing criteria tailored to that context.

Rad Resource: ALNAP's Quality Proforma is an instrument designed for assessing evaluation reports on humanitarian action. Its criteria are tailored to the domain in which the evaluations were conducted and focus on report quality.

This contribution is from the aea365 Daily Tips blog, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org.

