Hello! My name is Kate Satterfield, and I’m the Deputy Director, Evaluation and Systems Improvement at New Morning, a non-profit that promotes equitable access to contraception in South Carolina. Our flagship program began in 2017 and comprises a network of more than 70 health systems (“partners”) across the state, granted funds to improve the quality of care and provide contraceptive methods regardless of patients’ ability to pay. The program has benefitted from a comprehensive external evaluation of our activities through 2022, conducted by the Center for Applied Research and Evaluation in Women’s Health at East Tennessee State University (ETSU). While ETSU has the capacity to empirically investigate changes in contraceptive access and use across the state, New Morning’s internal evaluation team, in partnership with technical assistance liaisons assigned to each partner, monitors processes and outputs at partner sites, working with staff to generate inquiry and inform decisions.
Because our program relies on the distributed efforts of our partners, building and maintaining relationships with coordinators at each partner has been essential both for successful implementation and for promoting evaluative thinking about those successes. In retrospect, we used different strategies for these relationships at different periods. Today, I want to share the trajectory of our journey and some lessons learned along the way.
Early Stage: The first few years of our program were characterized by LOTS of learning and change among all parties. While New Morning liaisons worked directly with partners to improve care, the internal evaluation team collaborated with a small workgroup of partner staff to develop shared measures that fit across different health care contexts. New Morning also served as a bridge between the external evaluation and partners, preparing partners to receive requests from the ETSU evaluators.
Lesson Learned: At the beginning, it was necessary for staff from multiple program components to have direct relationships with partners. However, partners needed an adequate introduction. For example, once liaisons had built trust with partner staff, other program components used those liaisons as a starting point for making connections.
Middle Stage: In part due to a need to reduce communication and burden at the peak of the pandemic, and in part due to a recognition that we could operate the program at scale more efficiently, we later shifted some of the internal evaluation team’s objectives to staff liaisons. Functionally, this meant liaisons did the first-pass review of reporting. But also…
Integrating implementation staff into the reporting process creates opportunities to use data to generate inquiry about partner processes, strengthens the relationship between implementation and evaluation, and promotes evaluative thinking.
Current Stage: Since the beginning of the program, our liaisons have always done some small-scale data collection with partners for quality improvement. However, none of this extra data was structured in a way that our internal evaluation team could use and examine across the entire network. After laying the groundwork described in the earlier stages, we finally have a structure for aggregating the data collected and used by those closest to implementation.
It seems like a no-brainer, but it’s easier said than done: implementation and internal evaluation staff should collaborate from the beginning so both can benefit from compatible data. Additionally, shifting some data collection outside of a formal “reporting” process relieves some of the pressure partners feel within the power dynamics of grantmaking.
We are excited to continue developing these relationships, and to share learning from the external evaluation with our partners as more findings become available.
The American Evaluation Association is hosting Internal Evaluation (IE) Topical Interest Group Week. The contributions all this week to AEA365 come from our IE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.