Hello, I’m Dr. Rick Hoefer, Professor of Social Work at the University of Texas at Arlington, focusing on management and policy topics in my teaching and writing. I’ve been thinking about how the pandemic has touched the evaluation of evidence-based programs (EBPs).
Questions Regarding Evaluating Evidence-based Programs
EBPs are designed to be implemented with certain resources, in a certain way, to achieve certain outcomes. This process is often laid out in a logic model, which guides the program toward its intended outcomes and guides evaluators in determining how well the program was implemented. Activities are implemented with “fidelity” when the program design is closely followed. Fidelity assessment of EBPs is vital before assessing outcomes, and it is increasingly required by funders.
But in a pandemic? Much of the careful planning is impossible to enact. Perhaps staff members were unable to implement the program face-to-face with fidelity and switched to online activities instead. How does that affect the evaluation? Did the EBPs become “tainted” in terms of expected outcomes?
Even if programs can be implemented with fidelity, other problems may surface. The considerable disruption of people’s lives means that “logical” connections between activities and outcomes may be uncertain. In other words, the service users are no longer truly the same as the users the EBP was tested with. How can evaluators cope? Issues of diversity and equity are present in COVID-infected evaluations. For example, we know women have borne a heavy share of emotional labor during the pandemic, and many human services workers (and service recipients) are women—has that played a role in program outcomes? Lower-income populations have been hard-hit by shutdowns and quarantines as well—are program results shaped by local COVID infection rates and their uneven economic fallout?
- Disaster planning has become vital for all organizations. Many agencies have contingency plans for large-scale disasters, particularly weather events, but most were unprepared for a pandemic. Still, managers found they needed to be agile in responding to revised work expectations and problems. Given the continuing emergence of new COVID variants, organizations should plan for similar disruptions moving forward. Protocols about what is allowable for “adaptations” of EBPs need to be considered. Funders are part of the ecosystem and must explore the issues systematically and communicate their expectations to their grantees.
- The pandemic does not excuse evaluators from obtaining diverse views when conducting an evaluation.
- Program models needed to change to address issues clients were suddenly and unexpectedly facing. On top of financial and health concerns, stress was felt by everyone. Evaluators must consider the impact on both program recipients and staff when such problems emerge.
- Just as widespread remote work has shown that working in offices is less important than previously believed, social work organizations now need to consider anew what are essential methods and processes for delivering programs and assessing results. AEA has an important role in reimagining the professional response when the context changes so drastically. This may be a task for AEA’s Evaluation Policy Task Force. D’Brot helpfully discusses using Program Evaluation Standards to look at these types of issues in education settings.
- For more discussion and references regarding how the COVID-19 pandemic has impacted human services programs, see the new book by Hoefer and Watson, Program Development, Grantwriting and Implementation: From Advocacy to Outcomes, published by Cognella. Information is included in each chapter on this topic.
The American Evaluation Association is hosting SW TIG Week with our colleagues in the Social Work Topical Interest Group. The contributions all this week to AEA365 come from our SW TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.