My name is Julianne Manchester, Co-Program Chair for the Community Psychology TIG and the PI at Case Western Reserve University School of Medicine for an evaluation capacity-building initiative with health professionals planning educational programs. I am pleased to be discussing Evaluability Assessment in this kick-off blog post for AEA365.
What is Evaluability Assessment (EA)? According to the oft-cited founder of EA, Joseph Wholey, it is (in a nutshell) a series of steps with stakeholders to assess the probability that programs will achieve measurable objectives. In my PI role, I’ve had the (I think valuable) experience of watching programs that did not conduct an EA become stuck as stakeholders (in this case, from clinical settings) experienced shifts in organizational priorities toward the continuing education of staff.
These have included unanticipated changes in data-collection access to electronic medical records or in senior hospital leadership priorities. Perhaps advance work with these stakeholders through an EA process could have prevented the educational programmers from scrambling to find new sites mid-stream. But that scramble was necessary in order to train nurses and measure provider changes with patients by the federal reporting deadlines.
My challenge is to disseminate an EA framework within the health professions community, particularly among those implementing continuing education programs across multiple disciplines (nursing, social work, pharmacy). I hope to develop a model I can put forth within this context.
Lesson Learned: Different fields have different names for what is essentially an evaluability assessment. In healthcare-oriented research, I couldn’t even find the term until I started looking under implementation research (driven by implementation theory). This seems to be the appropriate umbrella for these and other planning evaluation activities (developing logic models, and so forth) when translating evidence-based programs into practice.
Rad Resource: I found a wonderful guide to EA related to public health (and other areas) in the 2010 article Evaluability Assessment to Improve Public Health Policies, Programs, and Practices, available open-access through this website: http://www.annualreviews.org/journal/publhealth
The American Evaluation Association is celebrating CP TIG Week with our colleagues in the Community Psychology Topical Interest Group. The contributions all week come from CP TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Julianne
Re your definition, I don’t have any problem with the involvement of stakeholders; that was not why I questioned the definition you used.
Re your question on how an EA should relate to a ToC: Given that most EAs are done after a project has been designed and funded (with the exception of the IADB examples I have cited), there would normally already be some form of ToC in place before an EA is requested. However, given what I have read, it is quite likely that such pre-existing ToCs will need further improvement before they are evaluable. An EA can point out how and where improvements need to be made to a ToC.
How much attention is given to the ToC will depend in part on when during the project cycle an EA is commissioned. If it is very early on, the ToC may not yet be fully articulated and it may not be too late to suggest a major reconstruction 🙂 But later in a project’s life, the room to suggest changes to the ToC probably diminishes. Too much action has already been taken, predicated on the ToC as it existed up to that point. I think…
I agree with the definitions that Rick Davies cites, and that the definition you used does not represent the basic purpose of evaluability assessments. The EA points out the strengths and weaknesses of the ToC or outcomes framework, gaps in the available data, and also any glaring implementation failures. If the program is not yet evaluable, the assessment should lead to recommendations that will improve the program and the program’s M&E frameworks and systems.
Also, the article you pointed to sounds really interesting, but it is not open access!
In follow-up to both Bonnie and Rick, I am just happy people are so interested in the topic. As Rob Fischer has stated, it is a hard sell.
You’ve raised a good point concerning my definition. Indeed, at least a couple of our blog posts recommend you as a source. Let me say I stick to the definition, in that program stakeholders are involved and the steps require their involvement. I’m defining stakeholders as those close to the program and invested in its results. These could be the programmers, the recipients of the program, and anyone in between or external to that scenario.
However, I did not mean to imply that EA is “driven” by these stakeholders. I see EA (and I think there is at least consensus on this point) as an externally driven process required by an entity (perhaps a funder). A little wordsmithing issue here.
I do have a question for Rick: You discuss the Theory of Change being tested as part of EA. However, several views see a Theory of Change as being produced as a PRODUCT of the EA process. What are your thoughts?
Thanks Rick Davies for your additional source here. It is certainly a wonderful addition to any compendium on the topic. However, I think the definition provided is still in line with the literature and complementary to yours. I believe that reliability and credibility are certainly things we strive for as part of the EA process, and they are terms that have different meanings to different audiences.
Your definition of evaluability assessment as being “a series of steps with stakeholders to assess the probability that programs will achieve measurable objectives” sounds to me more like an ex-ante evaluation rather than an evaluability assessment.
My preferred definition would be one based on the OECD-DAC definition of evaluability: “The extent to which an activity or project can be evaluated in a reliable and credible fashion.” This is what an evaluability assessment would look at.
In practice, the concept of evaluability is often used in two different but complementary ways. One is “in principle” evaluability, which looks at the nature of a project design, including its Theory of Change (ToC), and asks if it is possible to evaluate it as it is described at present. The second is “in practice” evaluability, which looks at the availability of relevant data, as well as the systems and capacities that make that data available. In addition, many evaluability assessments extend their interests beyond evaluability itself. The most common extension is an inquiry into the practicality and usefulness of doing an evaluation, through discussions with stakeholders.
These views are based on a recent review of evaluability practice over the last ten years, which may have taken a different direction to Joseph Wholey’s original conceptualisation of evaluability assessment. You can find the review here: http://mande.co.uk/blog/wp-content/uploads/2013/10/DFID-Working-Paper-40-final-version-2013-10-08.pdf