My name is Dan Kidder, and I’m the Monitoring and Evaluation Unit Lead in CDC’s Program Performance and Evaluation Office (PPEO). CDC is a large and decentralized organization. More than 90 divisions do our work, and more than two-thirds of CDC funding goes to support frontline work at state, local, and tribal health departments; ministries of health; and non-governmental organizations. Supporting those many players to do evaluation, much less do it uniformly, is a challenge. CDC’s evaluation framework (Monday’s post), supporting materials (Wednesday’s and Thursday’s posts), and a common template for funding applications (today’s post) are moving us toward higher quality and more consistent evaluation practices.
Logic models to the rescue
In 2013, responding to requests from the Advisory Committee to the CDC Director and funding recipients for more consistent funding announcements, CDC revised its application template for funding opportunities (Notice of Funding Opportunity, or NOFO). The template required several evaluation-related elements, including a simple logic model and a section on evaluation and performance measurement. The logic model requirement did just what we evaluators know it can do: it helped CDC programs clarify what they want recipients to do with the funding, and the outcomes to be achieved in the project period.
All aboard!
The inclusion of the simple logic model and the evaluation and performance measurement section had another effect: it got program staff and evaluators working together early on, before the end of the project period. In other words, the evaluators got a seat on the train before it left the station, rather than three-quarters of the way to the destination.
Stronger applications, stronger programs
When CDC programs use the logic model as an outline for a funding announcement, the result is a more consistent document. There’s a consistent story line across the narrative statement of work, the performance measurement and evaluation section, and the recipient work plan. Without that, it’s hard for applicants to explain in their NOFO responses how they will plan and evaluate their work. It’s also hard for CDC to explain post-project what was done with the funding and what the impact was.
We’re now seeing the payoff of this approach. Some of the first CDC programs to use the new template are up for renewal, and these programs are using data collected from their recipients to refine the approach in their new NOFO. With one success under their belt, these programs are even more likely to involve their evaluators from the start of NOFO development.
Lessons Learned:
- A logic model is evaluation’s foot in the door. Requiring even a very simple logic model is an early and easy way to see if there is clarity and consensus about the funded program’s activities and expected outcomes – and to intervene when there’s not.
- Alignment is the goal. When the logic model guides the activities and outcomes, program narrative, and the choice of performance measures, and all of this informs the work plan, it’s easier to collect and use the results in a way that matters.
Disclaimer: The opinions, reflections, findings, and conclusions expressed in this blog post are those of the author and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
The American Evaluation Association is celebrating the 20th anniversary of the CDC Framework for Program Evaluation in Public Health, where authors from the Centers for Disease Control and Prevention (CDC) offer some history, lessons learned, resources, and thoughts about applied evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.