
Engaged and Empowering Evaluation: Leveraging the Expertise of Stakeholders within Non-Profit Evaluations by Tom Summerfelt

Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.


Dr. Tom Summerfelt

Have you ever heard something like:

“So, Doc, we need 45 community health programs evaluated, when can you get that done?”

Or

“We received funding for our comprehensive community program that requires evaluation, we budgeted $10,000 for evaluation, what can you do?”

Within the non-profit sector, we will never assemble an army of evaluators large enough to meet the demand. So, what can we do? My name is Dr. Tom Summerfelt, and I serve as the Chief Research Officer at Feeding America. This post presents the participatory, engaged approach Feeding America is currently implementing to respond to this obstacle: shifting its evaluation staff from direct service toward empowering, coaching, and mentoring program staff to integrate evaluation with their programming. As background, Feeding America is a two-tiered, federated network in which a National Organization serves 200 food banks that partner with 60,000 community agencies to serve our neighbors experiencing food insecurity. In 2021, the Network distributed over 6 billion meals and served over 53 million individuals.

Our “engaged and empowering” approach begins with creating the engagement context in two ways: shared expertise and demystification of evaluation. First, we acknowledge the expertise of program staff; they know their program and the local context better than anyone. This is critical because it levels the playing field: both program staff and evaluators are experts. Second, we demystify evaluation for program staff by reminding them that they have been exposed to the scientific method since 5th grade AND that they perform evaluation activities all the time. Purchasing peanut butter is one way we emphasize this: it is a complex evaluation problem that involves identifying indicators of importance (e.g., quality, cost, value, volume, or branding) and then applying those indicators to make the choice (outcome). While this might seem silly to evaluators, it resonates with program staff and can be empowering for them.

Lessons Learned

We have learned that after the context has been created, it is important to facilitate a conversation to answer three questions:

  1. What does success look like for each program activity? We typically capture this in a simple matrix where the rows are activities and the columns are immediate, intermediate, and long-term success (see the illustrative sketch after this list). This question makes the implicit assumptions and underpinnings explicit by having staff link individual program activities to their eventual goals. It also keeps logic models and theories of change, which often cause confusion, out of the conversation. Another benefit of this approach is that it automatically adjusts to the maturity of the program: less mature programs will typically emphasize process and implementation indicators, whereas more mature programs will emphasize outcome indicators. We lead with a conceptual articulation of the program rather than starting with what data we can use (a common starting place for programs).
  2. How can we measure each of these points of success? We explore whether each point of success might be measured directly, through a proxy, or not at this time. We do not spend time distinguishing between outputs and outcomes; frankly, that distinction is irrelevant to staff delivering programs. This evaluation planning is also intended to be revisited over time: the maturity of the program will dictate what success looks like and how it is measured.
  3. Finally, compared to what? This is an often-neglected area for program staff and a perfect place for evaluators to be creatively flexible with design options. We have adopted the principle of “rigor without the mortis,” meaning that we work with program staff to explore different experimental designs and jointly select one that is feasible and as rigorous as possible.
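To make the first two questions concrete, here is a hypothetical sketch of one row of such a matrix, using an imaginary cooking-class activity; the activity and measures below are illustrative assumptions, not taken from an actual Feeding America evaluation plan:

  * Activity: monthly cooking classes at a partner agency
  * Immediate success: classes delivered and attendance counts (measured directly from sign-in sheets)
  * Intermediate success: participants report preparing featured recipes at home (measured by proxy through a brief follow-up survey)
  * Long-term success: improved household food security (not measured at this time)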

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
