AEA365 | A Tip-a-Day by and for Evaluators


I’m Giovanni Dazzo, co-chair of the Democracy & Governance TIG and an evaluator with the Department of State’s Bureau of Democracy, Human Rights and Labor (DRL). I’m going to share how we collaborated with our grantees to develop a set of common measures—around policy advocacy, training and service delivery outcomes—that would be meaningful to them, as program implementers, and DRL, as the donor.

During an annual meeting with about 80 of our grantees, we wanted to learn what they were interested in measuring, so we hosted an interactive session using the graffiti-carousel strategy highlighted in King and Stevahn’s Interactive Evaluation Practice. First, we asked grantees to form groups based on program themes. Then each group was handed a flipchart sheet listing one measure and had a few minutes to judge its value and utility. This was repeated until each group had posted thoughts on eight measures. In the end, this rapid feedback session generated hundreds of pieces of data.

Hot Tips:

  • Add data layers. Groups were given different colored post-it notes, representing program themes. Through this color-coding, we were able to note the types of comments from each group.
  • Involve grantees in qualitative coding. After the graffiti-carousel, grantees coded data by grouping post-its and making notes. This allowed us to better understand their priorities, before we coded data in the office.
  • Create ‘digital flipcharts’. Each post-it note became one cell in Excel. These digital flipcharts were then coded by content (text) and program theme (color); a simple Excel macro can then tally the data by cell color.

  • Data visualization encourages dialogue. We created Sankey diagrams using Google Charts, and shared these during feedback sessions. The diagrams illustrated where comments originated (program theme / color) and where they led (issue with indicator / text).
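The tally-by-color and Sankey steps above can be sketched in a few lines of code. This is an illustrative sketch, not the authors’ actual Excel macro: it assumes each digitized post-it has been recorded as a (theme color, coded issue) pair, with hypothetical color and issue names, and it produces both a per-theme count and the [source, target, weight] rows that a Sankey chart (such as Google Charts’ Sankey) consumes.

```python
from collections import Counter

# Hypothetical digitized flipchart: each tuple is one post-it note,
# recorded as (theme_color, coded_issue). All names are illustrative.
post_its = [
    ("yellow", "unclear definition"),
    ("yellow", "hard to collect"),
    ("pink", "unclear definition"),
    ("pink", "not relevant"),
    ("blue", "hard to collect"),
    ("blue", "unclear definition"),
]

# Tally comments by program theme (color) -- the role the Excel macro played.
by_theme = Counter(color for color, _ in post_its)

# Build [source, target, weight] rows for a Sankey diagram, showing where
# comments originated (theme/color) and where they led (coded issue/text).
flows = Counter(post_its)
sankey_rows = [[color, issue, n] for (color, issue), n in sorted(flows.items())]

print(by_theme)
for row in sankey_rows:
    print(row)
```

The same row structure drops directly into most Sankey tools, so the coded spreadsheet can feed the feedback-session visuals without re-entry.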

Lessons Learned:

  • Ground evaluation in program principles. Democracy and human rights organizations value inclusion, dialogue and deliberation, and these criteria are the underpinnings of House and Howe’s work on deliberative democratic evaluation. We’ve found it helpful to ground our evaluation processes in the principles that shape DRL’s programs.
  • Time for mutual learning. It’s been helpful to learn more about grantees’ evaluation expectations and to share our information needs as the donor. Following the graffiti-carousel session, the entire process took five months and included several feedback sessions. During this time, we assured grantees that these measures were just one tool, and we discussed other useful methods. While regular communication created buy-in, we’re also testing these measures over the next year to allow for sufficient feedback.
  • And last… don’t forget the tape. Before packing your flipchart sheets, tape the post-it notes. You’ll keep more of your data that way.

The American Evaluation Association is celebrating Democracy & Governance TIG Week with our colleagues in the Democracy & Governance Topical Interest Group. The contributions all this week to aea365 come from our DG TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Our names are Wendy Viola, Lindsey Patterson, Mary Gray, and Ashley Boal and we are doctoral students in the Applied Social and Community Psychology program at Portland State University.  This winter, we took a course in Program Evaluation from Dr. Katherine McDonald.  We’d like to share three aspects of the seminar that we felt made it so useful and informative for us.

  1. Classroom Environment. The format of the course encouraged open and interactive dialogue among the students and the instructor. The atmosphere was conversational and informal, allowing students the space to work through sticky issues and raise honest questions without fear of judgment. Regular course activities allowed us to consider creative approaches to program evaluation and develop activities that we brought to class for other students. For example, Dr. McDonald incorporated program evaluation activities, such as Patton’s activities to break the ice with stakeholders, and Stufflebeam’s (2001) “Program Evaluation Self-Assessment Instrument,” into our classroom activities.

Hot Tip: Engage students by facilitating an open and interactive environment that fosters discussion and creativity.

  2. Course Content. The course covered both evaluation practice and theory, including the historical and philosophical underpinnings of evaluation theories. Because gaining expertise in the theory and practice of program evaluation in a 10-week course is not possible, Dr. McDonald provided a tremendous amount of resources for us to peruse on our own time and refer back to as necessary, as we begin working on evaluations more independently.

Hot Tip:  Provide students with templates, examples, and additional references about any activities or topics covered in order to allow them access to resources they will need once the course is over.

  3. Applications. One of the most valuable aspects of the course was its emphasis on the application of theory to the real world. During the course, we developed and received extensive feedback on logic models, data collection and analysis matrices, and written and oral evaluation proposals. Additionally, we participated in a “career day” in which Dr. McDonald arranged a panel of evaluators who work in a variety of contexts to meet with our class to discuss careers in evaluation.

Hot Tip: Allow students to practice skills they will need in the real world and expose them to the diverse career opportunities in the world of program evaluation.

Our seminar only scratched the surface of program evaluation, but these features of the course provided us with a strong foundation in the field, and elicited excitement about our futures in evaluation.


· · · · ·

My name is Stewart Donaldson, and I am a Professor and Director of the Institute of Organizational and Program Evaluation Research at Claremont Graduate University. I have been helping programs and organizations develop theories of change and related types of conceptual frameworks to guide evaluations for more than 20 years.  One of the big challenges in this work is adequately conceptualizing and representing the complexity of planned interventions or change efforts.  In recent years, my colleague Tarek Azzam and I have been pioneering the application of new software to help us with this challenge.

Rad Resource: We now provide free resources on a website titled Theory-driven Evaluation to support evaluation practitioners who would like to use this approach and software to improve their work. Provided on this site are examples of completed interactive conceptual models that you can click through and explore, links to the software (including free trials) that we use to create these interactive frameworks, and related evaluation articles and website links. Our experiences so far confirm that clients really appreciate this approach to representing theories of change and the complexity of their hard work. It has certainly brightened our evaluation lives.

Happy evaluating!
