CP TIG Week: Debra Rog on Historical Perspective on Evaluability Assessment
Hi! My name is Debra Rog, and I'm thrilled to contribute this blog on evaluability assessment (EA). I have been interested in EA's application and utility since grad school. In fact, EA was the topic of my dissertation nearly 30 years ago! Now, as an evaluator working at Westat, I've been able to use EA both formally and informally in a range of efforts.
I've watched EA rise in use over the last decade or so, after a long period of diminished use from the late 1980s through the 1990s. It is a tool in the evaluator's toolkit that can improve the targeting of our evaluation efforts. With an eye toward maximizing our evaluation funding, EA can help reduce waste on premature evaluations and improve the focus and planning of those that do occur.
EA was developed to assess a program's 'readiness' to be evaluated against its outcomes. Joseph Wholey and colleagues in the late 1970s discovered that many federal evaluations were not useful to managers, in part because they were yielding null or negative results with little information to make decisions. Upon investigation, Wholey and colleagues found a number of reasons for these results, including evaluations being conducted:

– on programs that were not fully developed, and some not even in place;

– against goals that were stated primarily for obtaining funding and were often very vague and unrealistic; and

– with measures of outcomes that were not fully agreed upon by key stakeholders.

Therefore, Wholey and colleagues developed EA as a tool to assess these features and others BEFORE undertaking an evaluation.
Lessons Learned: EA is a practical tool that can be used as is or modified for many pre-evaluation situations. In addition to using EA to assess the readiness of a program for an evaluation, I've found it to be useful in my own work in:
– selecting program sites to include in a multisite outcome evaluation;
– providing quick information to program funders to guide technical assistance and other supports (especially in programs with multiple sites); and
– guiding the development of new programs and initiatives.
Even in situations where funding has not been specifically allocated for EA, I have used an abbreviated approach (typically involving only key document review and key informant telephone calls) to learn more about a program’s goals, level of implementation, context, and so on to help in the planning of an evaluation. In many ways, ‘evaluability’ is a perspective that is helpful to have before engaging in an evaluation.
Rad Resources: A few relatively recent useful resources:
Evaluability Assessment to Improve Public Health Policies, Programs, and Practices (2010) by Laura Leviton et al.
Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations (2013) by Rick Davies.
The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.