My name is Tom Chapel, and I’m the Chief Evaluation Officer for the Centers for Disease Control and Prevention (CDC). As Dan mentioned yesterday, the 1999 publication of the CDC Framework for Program Evaluation in Public Health spurred the integration of evaluation into program planning and execution at CDC. Right now we’re pausing to ask how fully the Framework’s principles have taken hold across the agency and how that’s paying off in the use of evaluation findings.
Several concurrent efforts are underway to answer these questions: we’ve completed detailed interviews with evaluators from 20 CDC programs; we’re conducting in-depth “use” case studies with three programs known for strong use of findings; and we’re fielding a survey of the CDC evaluation community. In all three components, we’re examining the culture, function, and types of evaluation at play and, more importantly, looking for patterns in how those factors affect evaluation use.
Practicing what we preach, we’ll use these findings to improve our own evaluation work. Fortuitously, the anniversary of the Framework’s publication coincides with the passage of the Foundations for Evidence-Based Policymaking Act, so these efforts will also guide CDC’s implementation of key components of the Act, such as upgrading our evaluation policy, exploring the development of a CDC learning agenda, and developing annual evaluation plans.
Lessons Learned:
Here are some themes we’ve seen over the last 20 years about how to gain traction for evaluation at CDC:
- Utility is king. To be useful, evaluations have to answer questions that someone is asking and respond with actionable findings.
- Leadership buy-in is key. It matters for allocating resources and for ensuring findings turn into action.
- The right messaging is crucial. Help colleagues understand that evaluation is a philosophy, not a set of methods: a commitment to continuous program improvement and funding stewardship. Evaluation designs and methods should therefore be tailored to the information needs at hand.
- When leadership and others don’t support evaluation, it often reflects a lack of knowledge, not of commitment. Evaluation competes for “head space” with research, performance measurement, quality improvement, and risk mitigation. You can either position these as part of an evaluation “extended family” or demonstrate how evaluation adds value in this mix.
Rad Resource:
Visit our website, where we’ll share information about our 20th anniversary activities and what we’ve learned along the way. Activities culminate on September 16th and 17th, when we hold our 7th annual CDC Evaluation Day and honor the recipients of the CDC Evaluation Awards. From the website you can also easily navigate to all of our other resources that support strong program evaluation in public health.
Disclaimer: The opinions, findings, and conclusions expressed in this blog post are those of the author and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
The American Evaluation Association is celebrating the 20th anniversary of the CDC Framework for Program Evaluation in Public Health with a series of posts in which authors from the Centers for Disease Control and Prevention (CDC) offer some history, lessons learned, resources, and thoughts about applied evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.