
Simplifying Program Evaluation for Teams of Non-Evaluators by Sarafina Robinson Ndzi

Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future Individuals Weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.


Author Sarafina Robinson Ndzi

Hi! My name is Sarafina Robinson Ndzi, and I’m a program evaluator at The Consultation Center at Yale. I consider ways to demystify program evaluation for non-evaluators. I believe in communicating with simplicity and making complex ideas understandable. I strive to balance rigorous methodology with commonsense clarity.

I have facilitated various program evaluations, including:

  • Assessing the effectiveness of a community health worker training initiative
  • Evaluating an afterschool music program for students in a low-income community
  • Conducting a needs assessment for a family-based counseling initiative
  • Examining the impact of a school-based intervention on high school students’ understanding of the consequences of underage drinking

Through each project, I’ve learned ways to communicate so that teams and program participants make good use of evaluation findings.

Getting Started

When I’ve asked teams, “What words come to mind when you hear ‘program evaluation’?” I’ve often been met with expressions of confusion or disinterest. Common responses include “quality assurance,” “surveys,” and “compliance.” These responses often yield head nods as team members agree with one another. I’ve followed up by asking, “What is your previous experience with program evaluation?” I once heard someone share that they worked with an evaluator who collected data from their program participants but never shared the results. (Yikes!) Another team shared that they felt reprimanded for not meeting benchmarks set by their funder. (Ouch!)

Lessons Learned

Responses like these have helped me explore what teams would rather experience. Here is what I have learned so far:

Address Teams’ ‘Burning Desire Questions’

An essential part of the evaluation planning process is discussing what questions teams want answered. Often, teams focus solely on what their funder requires. As a program evaluator, I ask what they personally want to learn from an evaluation. This sparks creative brainstorming. For example, a team once shared, “We want to know what ‘success’ means to our program participants,” which differed significantly from their funder’s vague request to measure impact. The team clarified that they wanted ‘impact’ to be defined by what their participants considered important. They articulated that reduced trauma symptoms, increased emotional regulation skills, and access to social service resources among participants were crucial. Based on this input, we devised an evaluation plan with these indicators of success in mind.

Provide Information in an Understandable Manner

It is important to use common language, reduce jargon, and summarize findings in digestible ways using data visualizations and easy-to-read reports. The era of cumbersome evaluation reports is over. Reports should be useful to the program being evaluated. I rely on frameworks like Results-Based Accountability and books like A Short Primer on Innovative Evaluation Reporting for tips on creating effective and easy-to-understand evaluation plans and reports.

Agreeing on definitions of commonly used phrases ensures everyone has a common understanding and avoids miscommunication. Ultimately, we want to know how well a program achieved its goals (impact evaluation) or how well it was implemented (process evaluation). Clear, concise communication helps program evaluators accomplish both.

Emphasize Continuous Learning, Not Compliance

Program evaluation is an opportunity to learn what’s going well, what needs improvement, and what might need to change. Collecting meaningful data that answers teams’ ‘burning desire questions’ and presenting it in understandable language are crucial for data-driven decision-making. It’s important to set the stage that program evaluation is not a compliance check but a tool for continuous learning: expanding knowledge and skills. Program evaluators play a unique role in emphasizing learning over compliance.

I often say, “This is not about receiving a grade. This is about learning. What would you like to learn from our process?”


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
