Greetings. We are Keira Gipson, Monitoring and Evaluation Officer at the U.S. Department of State, Bureau of Conflict and Stabilization Operations, and Cheyanne Scharbatke-Church, Principal at Besa, a boutique social enterprise specializing in the evaluation of programming in fragile states. We wanted to share insights from an evaluability assessment (EA) we conducted as part of an evaluation capacity building exercise.
Hot Tip: If you are open to a variety of evaluation approaches and learning opportunities, using a return on investment (ROI) lens to analyze your EA data helps maximize evaluation utility. There are many good EA guidance notes available with criteria for determining a program’s evaluability. Some use a weighting approach to determine whether one should proceed with an evaluation, while others use a percentage of criteria met. We found the evaluation decision depends more on what you want to learn and the resources you’re willing to invest than on meeting a given number of criteria.
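To make the contrast between those two scoring logics concrete, here is a minimal sketch; the criteria, weights, and thresholds are invented for illustration and are not drawn from any particular guidance note.

```python
# Hypothetical illustration of two ways EA guidance notes score evaluability.
# Criterion names, weights, and the 0.75 threshold are invented for the example.

criteria = {
    "clear theory of change":      {"met": False, "weight": 3},
    "usable monitoring data":      {"met": True,  "weight": 2},
    "identified evaluation users": {"met": True,  "weight": 2},
    "safe and ethical access":     {"met": True,  "weight": 3},
}

# Approach 1: percentage of criteria met, compared to a chosen threshold.
pct_met = sum(c["met"] for c in criteria.values()) / len(criteria)
proceed_by_percentage = pct_met >= 0.75  # the threshold is a judgment call

# Approach 2: weighted score, so some criteria count more than others.
weighted = sum(c["weight"] for c in criteria.values() if c["met"])
total = sum(c["weight"] for c in criteria.values())
proceed_by_weighting = weighted / total >= 0.75

print(f"{pct_met:.0%} of criteria met; weighted score {weighted}/{total}")
print(f"Percentage rule says proceed: {proceed_by_percentage}; "
      f"weighting rule says proceed: {proceed_by_weighting}")
```

Note that the same checklist can pass one rule and fail the other, which is part of why we found the decision rests more on intended learning and available resources than on the tally itself.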
There are a few non-negotiable EA criteria when recommending an evaluation, such as being able to conduct it safely and ethically. Most criteria, however, have nuanced implications for an evaluation that mere tallying doesn’t capture. Even the lack of a program design needn’t prevent an evaluation if, for example, the program team is willing to retroactively create a theory of change or to pursue a goal-free evaluation. The significance of the criteria, in other words, depends on an evaluation’s context.
Building on Rick Davies’ work, specifically the idea of EA results representing an “index of difficulty,” we developed a decision flowchart that helps weigh the costs an unmet criterion imposes on a particular evaluation against the learning/accountability benefits that specific users would gain from pursuing the question.
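As a rough illustration only (the flowchart itself is a qualitative tool), the ROI-style reasoning might be sketched like this; the function, the non-negotiable list, and the 0–10 scales are hypothetical.

```python
# Hypothetical sketch of the cost-versus-benefit decision logic, not the actual flowchart.
# It weighs what unmet criteria cost a given evaluation question against the
# learning/accountability value that question holds for its intended users.

def evaluation_decision(unmet_criteria, cost_of_workarounds, benefit_to_users):
    """Return a rough recommendation for one evaluation question.

    unmet_criteria: list of EA criterion names that were not met
    cost_of_workarounds: estimated effort to compensate for them (0-10 scale)
    benefit_to_users: expected learning/accountability value (0-10 scale)
    """
    NON_NEGOTIABLE = {"safe and ethical to conduct"}  # example only
    if NON_NEGOTIABLE & set(unmet_criteria):
        return "do not evaluate"
    if benefit_to_users > cost_of_workarounds:
        return "proceed, adapting the design to the unmet criteria"
    return "defer, or invest in evaluability first"

print(evaluation_decision(["clear theory of change"],
                          cost_of_workarounds=4, benefit_to_users=7))
```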
Lessons Learned:
- EAs provide broad capacity building opportunities: An EA process offers exposure to analysis, design, monitoring, and evaluation concepts, making it an excellent introductory capacity building vehicle.
- Develop a multi-faceted communication strategy: The value of doing an EA versus an evaluation may not be immediately obvious to program staff. Plan to communicate, in several iterations, what one gets from an EA compared to an evaluation.
Rad Resource: We developed a version of an EA checklist specifically for those with less evaluation and EA experience. We built on Rick Davies’ work, spelling out what meeting each criterion means so that less experienced practitioners can better understand the concepts.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Is ‘Rad’ being used as an adjective, or is it another TLA (three letter acronym)?
Hello Mike! We use “Rad Resources” as one of our standard blog headings, and that’s “rad” as in “radical,” so not an acronym! 🙂
Good to see that the DFID-commissioned work on EAs has been found useful. I liked the introduction of a “minimal threshold” and the provision of detailed explanations for each of the criteria.
In the spirit of “If you liked x, you might like y,” you might be interested in my more recent work, seen here: https://evalc3.net/
best wishes, rick davies