Greetings, AEA365 readers! Liz DiLuzio here, Lead Curator of the blog. To whet our appetites for this year’s conference in beautiful New Orleans, this week’s posts feature the perspectives of Gulf Coast Eval Network (GCEval) members, and the uniqueness of doing evaluation in the Gulf South will be on display. Happy reading!
Greetings from the Center for Research Evaluation (CERE) at Ole Miss. As senior evaluation associates working primarily in healthcare (Hope) and education (Moira), we often bounce ideas off each other and look for new ways to approach evaluation in our respective fields. In this article, we focus primarily on healthcare, with a supporting scenario from education as well.
In one such conversation, we talked about implementation science, which facilitates the practical use of research and incorporates it into real-life settings—taking what we learn to improve practice.
Through this process, we aim to improve outcomes by identifying the mechanisms necessary for successful implementation of evidence-based practices (EBPs). Just as we implement methodological steps in our evaluation practice—from protocol development and data collection to statistical and qualitative analysis, interpretation, and the development of actionable recommendations—programs use EBPs to guide their design and implementation.
Healthcare has long focused on the use of EBPs both in research and clinical settings—but, historically, it has taken ages to put research into practice (17 years, in fact!)—thus leading to the development of implementation science. The field is focused on implementation and use to drive impacts—sound familiar? While the evaluation-to-practice lag might not be as long, surely it is an issue familiar to program staff and evaluators alike.
Moira’s article was a great resource in thinking about implementation science’s potential place in education. She described the field as “the study of the components necessary to promote authentic adoption of evidence-based interventions, thereby increasing their effectiveness.”
Implementation Science in Evaluation
This sounds an awful lot like what we do in evaluation, which raises the question: why do we not see more about implementation science in evaluation?
As evaluators, we assess implementation and impact. We look at whether programs are implemented with fidelity. We use data to support program changes and improvements. The goal of every evaluation is to give our clients actionable and useful information that will allow them to optimize implementation that supports improved outcomes.
Building frameworks for the implementation of EBPs in evaluation would also allow the programs we evaluate to scale up their programming, supporting sustainable improvements and increased adaptability. This, in turn, strengthens our ability to develop actionable recommendations that target a program’s goals. No matter how successful a program is, a successful evaluation will always yield actionable recommendations that make a strong program even stronger.
Be open to implementation science! Embrace the potential use of its frameworks in your work. (Have we made our case yet?) How could you use components (or even just theories) to encourage the use of evaluation findings or to shorten the evaluation-to-practice gap?
Consider the connection between resources and impacts. In healthcare, education and other social programs, resources are limited. More efficient implementation of best practices can take full advantage of evaluation findings and have long-term effects on program impacts.
Check out this article and this resource from the University of Washington for good primers.
In health and evaluation, our goal is to do no harm. Evidence and best practices are powerful tools for selecting and implementing interventions, but can also be critical to decisions not to do something. Check out this article on de-implementing inappropriate health interventions.
We’re looking forward to the Evaluation 2022 conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to contribute to AEA365? Review the contribution guidelines and send your draft post to AEA365@eval.org. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
Comment on “Learning from Healthcare: Bringing Implementation Science to Evaluation by Moira A. Ragan & S. Hope Gilbert”
Thanks for this post! I completely agree and have been applying implementation science frameworks and theory into my evaluation practice for years. I think with recent events, we can see the value of implementation science theories even more strongly than ever. I’d be interested to hear more about how your team is applying this work and whether there is an opportunity to center implementation science in evaluation in additional writings or showcasing of examples. Kudos!