
Evaluation, the Unsung Golden Thread in an Innovative Community-Based Program by Cammie Switzer and Emily Elliott

Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.


Hi readers! We are Cammie Switzer and Emily Elliott, evaluators at the Behavioral Health Improvement Institute (BHII) at Keene State College. BHII works shoulder to shoulder with community partners to improve behavioral health practice, equity, and outcomes.  

We are privileged to be members of the evaluation team working alongside the Adverse Childhood Experiences Response Team (ACERT). In 2015, Amoskeag Health partnered with the Manchester (NH) Police Department (MPD) and YWCA-NH to mitigate the negative effects of ACEs through the ACERT program. Following an MPD intervention, a trauma-informed, multidisciplinary ACERT team – a police officer, a YWCA crisis services advocate, and an Amoskeag family advocate – reaches out to the family in a welcoming, inclusive, and destigmatizing manner. The team attends to the family’s basic physical, social, and emotional needs and connects them to additional community-based supports and services. 

In the aftermath of COVID and the unexpected, rapid dissemination of the program beyond its developers, ACERT faced a dual reality: workforce turnover among leadership and staff even as new roles were being created and staffed. Amidst the flux, the need for continuity beyond the program’s manager was clear. As evaluators, we are often in a unique position to serve as beacons of stability in dynamic environments, guiding our co-conspirators through transitions by focusing on the essential work. We propose that evaluators are the golden thread for community-based programs such as ACERT.  

Here are examples of how we’ve done this, plus some tools and strategies for maintaining your thread. 

Lessons Learned

Preserve and leverage historical program knowledge 

At BHII, we developed and maintain the web-based database that scaffolds and standardizes the ACERT practice. When practice changes, the app changes, and vice versa. We have built a bi-directional communication mechanism into the database that enables everyone engaged with the app to document requests, changes, and issues over time. We’ve also designed features that standardize operating procedures, require documentation of key activities, and integrate otherwise disparate areas of program management (like scheduling). The database enables us to advance what works and avoid what hasn’t. 

Home in on the program model (and never let go) 

Early on, we co-created a logic model and fidelity tool with the program manager. As statewide replication began, it became apparent that these new contexts would present opportunities for divergence from the gold-standard practice model. By regularly measuring and beating the drum about model fidelity (and drift), we’ve established necessary boundaries pointing back to ACERT’s inception. When new contexts call for changes, a clear view of the model helps the team know where adaptation is appropriate and where it threatens the heart of the practice. By keeping the database in sync with the fidelity tool and logic model, the data entry process reinforces best practices and keeps day-to-day work grounded in the model.   

Emphasize data collection routines and rituals 

Something we’ve learned that helps to overcome the inevitable data collection challenges all programs face is to create monthly data integrity goals. Together with our client, we review the data that don’t appear as we’d expect, seek to understand the story behind them, and problem-solve any challenges that arise (all while acknowledging what they’ve done well, of course!). This routine helps drive home the importance of their data entry and emphasizes its role in the larger scope of the program: providing feedback to partners, demonstrating accountability to current funders, soliciting new funding, and telling the ACERT story. 
We’ve also tried hard to make “routine” data collection fun through things like our simulation training – cue the fictitious Frostcove community and Hopper the police dog. 

Hot Tips

  1. Turnover is inevitable and can put the integrity of a program’s model at risk. When engaging with new or seasoned program staff, lean into discussing, refining, documenting, and setting up data collection and feedback loops to support everyone’s practice. 
  2. Be open to meeting the needs of dynamic programs while maintaining model integrity. Developing this perspective can be helpful, especially when faced with new ideas, challenges, and/or staff who might disrupt the defined boundaries. 
  3. Teach data literacy. Finding ways to empower folks at all levels of the program is so important for decision making, innovation, and driving toward meaningful outcomes. 

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
