Hi evaluators – my name is Lianne Estefan. I am a behavioral scientist in the Division of Violence Prevention at the Centers for Disease Control and Prevention (CDC). As part of this week’s posts showcasing examples of translational research, I am happy to share how we adapted CDC’s Science Impact Framework (SIF) (https://www.cdc.gov/od/science/impact/framework.html), an approach originally developed to demonstrate the impacts of science, to measure programmatic outcomes instead. We used our adaptation of the SIF in the multi-faceted evaluation of DELTA FOCUS, CDC’s intimate partner violence (IPV) prevention program (https://www.cdc.gov/violenceprevention/deltafocus/index.html). One goal of DELTA FOCUS was to encourage recipients – domestic violence coalitions in 10 states – to contribute to a national-level dialogue on IPV prevention, defined as sharing their prevention work with a wide audience. Despite this goal, we did not have a pre-determined way to categorize how recipients disseminated their work.
The opportunity for adaptation was born! The goal of the SIF is to identify indicators of short-term events and actions that may lead to longer-term public health impact. The SIF includes five domains of influence: Disseminating Science, Creating Awareness, Catalyzing Action, Effecting Change, and Shaping the Future. Each of these domains includes multiple indicators, which can be modified for different uses. Because of this flexibility, we adapted multiple indicators to examine programmatic effects. For example, we modified “requests to contribute to efforts that further science output” (an original indicator under Creating Awareness) to “providing subject matter expertise to external organizations.” This modification was more relevant to the recipients’ programmatic work and IPV prevention expertise. All of our adaptations followed similar reasoning.
Lessons Learned:
- Flexibility helps! The SIF was a good choice for our adaptation: it was designed to be flexible. We modified some indicators so they made sense for our needs and the recipients’ programmatic work. This was critical – but came with challenges. Some contributions, for example, could fit in more than one place, so we made carefully considered decisions about how we worded indicators and coded contributions.
- Consider the timing. We used the SIF retrospectively to frame our evaluation. Because the framework is designed to examine short-term impacts, it would be even more useful implemented prospectively, while planning programs.
- Examine outcomes. The SIF uses language that emphasizes impact, but we focused on shorter-term outcomes. This is especially useful for areas where it is difficult to measure long-term changes, such as IPV prevention. We were able to illustrate how the programmatic work was contributing to multiple areas, and look at steps along the way to longer-term outcomes.
- Demonstrate effective implementation. While this was outside the scope of our analysis, an adapted SIF could be used for program management. Because it can identify short-term indicators of long-term outcomes, programs would have the opportunity to make changes during implementation rather than waiting for long-term results.
Rad Resource:
- Our analysis using the adapted SIF provides more specific information on how we adapted the framework. https://onlinelibrary.wiley.com/doi/abs/10.1002/ajcp.12318
The American Evaluation Association is celebrating Translational Research Evaluation (TRE) TIG week. All posts this week are contributed by members of the TRE Topical Interest Group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.