AEA365 | A Tip-a-Day by and for Evaluators

TAG | Implementation Fidelity

Hi! We are Monica Oliver and Krista Collins, evaluation practitioners in Atlanta, GA. Through our extensive work with the US Department of Education (e.g., Investing in Innovation (i3), Race to the Top-District), we have constructed implementation fidelity frameworks for several complex, multi-level education programs. At the past two AEA annual conferences we, along with our colleagues, presented a step-by-step process for working with stakeholders to compute a fidelity index: an overall summative score that assesses the extent to which the program as implemented aligns with the program as designed. We demonstrated how to work with program staff in various contexts and stages of program development to identify the program’s core components, select quantifiable fidelity criteria, and compute fidelity scores that give program staff concrete information to guide implementation decisions.

We’ve expanded this conversation into an interactive half-day workshop at this year’s conference. Our goal is for participants to walk away with the following:

  • How measuring implementation fidelity can foster thorough process evaluation and inform programmatic decision-making
  • What a fidelity index is and how it can dovetail with an impact study
  • How to construct a fidelity index, including engaging stakeholders in the process
  • How to compute a fidelity index once it is constructed and data is collected
  • How to interpret fidelity indices and utilize them for program improvement, and
  • How fidelity indices need to be modified as programs develop and age.

Cool Trick: Approach fidelity assessment by first identifying your program’s key direct service components. This is harder than it sounds: not every program activity is a direct service, and not every component is key. Naming the key service components of your program at the outset will give you a running start toward assembling a comprehensive fidelity index.
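To make the arithmetic concrete, here is a minimal, hypothetical sketch of how component-level fidelity criteria might roll up into an overall index. The component names, thresholds, and the simple proportion-of-criteria-met aggregation are illustrative assumptions, not the specific method presented in our workshop.

```python
# Hypothetical sketch: rolling component-level fidelity criteria into an index.
# Component names and thresholds are invented for illustration only.

# Observed implementation data for one program site (hypothetical).
observed = {
    "coaching_sessions_delivered": 9,   # count per semester
    "teacher_training_hours": 14,       # hours completed
    "curriculum_units_completed": 7,    # of 8 planned units
}

# Fidelity criteria: the minimum value each key direct-service component
# must reach to count as "implemented with fidelity."
criteria = {
    "coaching_sessions_delivered": 8,
    "teacher_training_hours": 16,
    "curriculum_units_completed": 6,
}

def fidelity_index(observed, criteria):
    """Return the proportion of fidelity criteria met (0.0 to 1.0)."""
    met = sum(1 for component, threshold in criteria.items()
              if observed.get(component, 0) >= threshold)
    return met / len(criteria)

index = fidelity_index(observed, criteria)
print(f"Fidelity index: {index:.2f}")  # 2 of 3 criteria met -> 0.67
```

In practice, an index may weight components or use multi-level scoring rather than a simple pass/fail count; those design choices are exactly the kind that get made with stakeholders during index construction.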

Rad Resource: Look for our presentation materials in the AEA Public Library to get ready for the workshop.

  • 2013: Computing a Fidelity Index on a Program with Multiple Components
  • 2014: Assessing Program Fidelity across Multiple Contexts: The Fidelity Index, Part II

Rad Resource: The David P. Weikart Center for Youth Program Quality conducted a Youth Program Quality Intervention Study to understand best practices in youth program implementation. They describe a detailed institutional process for assessing implementation fidelity within an after-school setting to validate an effective and sustainable intervention model.

Rad Resource: Use the Tiered Fidelity Inventory developed by the U.S. Department of Education’s Office of Special Education Programs Technical Assistance Center to measure school-wide Positive Behavioral Interventions and Supports.

Want to learn more? Register for Creating an Index for Measuring Fidelity of Implementation at Evaluation 2015 in Chicago, IL.

This week, we’re featuring posts by people who will be presenting Professional Development workshops at Evaluation 2015 in Chicago, IL. Click here for a complete listing of Professional Development workshops offered at Evaluation 2015. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

My name is Robert McCowen and I am a doctoral fellow in Western Michigan University’s Interdisciplinary Ph.D. in Evaluation. I served as a session scribe at Evaluation 2010, and attended session number 606, Fidelity of Program Implementation in Educational Evaluations. I attended the session because my evaluation interests focus on education and assessment, and a recent research project with colleagues has involved overcoming—not always successfully—several obstacles to treatment fidelity.

Lessons Learned: Hendrick Ruitman of Cobblestone Applied Research and Evaluation, Inc., offered tips for evaluators who are attempting to ensure fidelity of implementation:

  • Manage the expectations of those responsible for implementation. If stakeholders expect a magic bullet, they’ll surely be disheartened.
  • Standardize training for implementers, and when possible make sure the trainers can discuss evaluation practices during training. This helps keep everyone on the same page—and makes evaluation a routine part of the program or treatment, rather than a later imposition.
  • Make an effort to keep implementation guidelines handy for the people who have to use them, and make the guidelines noticeable. A laminated sheet of hot pink paper is easier to find and reference than a pamphlet or notebook.
  • Develop an accurate way to track fidelity before the evaluation begins. Ad hoc methods work exactly as well as any experienced evaluator would expect!
  • Including fidelity of implementation in your evaluation design can help highlight failures and areas for improvement, and gives a great deal of information and control to stakeholders.

Hot Tip: Two authors have written about measuring fidelity on aea365. See their posts for more on this topic.

At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the winter on aea365. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.

Hello! My name is Sudharshan Seshadri and I am currently pursuing my master’s degree in Professional Studies, specializing in Humanitarian Services Administration.

I have come to realize that data is regarded as one of the most promising avenues for understanding the activity of evaluation. To meet those data needs, I believe that we as evaluators should make a conscious effort to explore the resources available in all forms of ubiquitous information.

I would like to share a few resources that are promising for those new to the conduct of evaluation. For ease of use, I have classified them under three headings:

Rad Resources for Program Planning

1. The Ohio State University Extension Evaluation Bulletin – a systematic approach to designing and planning program evaluations. (http://ohioline.osu.edu/b868/)

2. Program Planning – Program Development and Evaluation (PD&E), UWEX. (http://www.uwex.edu/ces/pdande/planning/index.html)

3. Planning a Program Evaluation: Worksheet (Cooperative Extension). (http://learningstore.uwex.edu/assets/pdfs/G3658-1W.PDF)

4. Evaluation Design Checklist, Daniel L. Stufflebeam, The Evaluation Center, Western Michigan University. (http://www.wmich.edu/evalctr/checklists/)

5. Key Evaluation Checklist (KEC), Michael Scriven. (https://communities.usaidallnet.gov/fa/system/files/Key+Evaluation+Checklist.pdf)

Rad Resources for Program Implementation, Monitoring, and Delivery

1. W.K. Kellogg Foundation Evaluation Handbook. (http://www.wkkf.org/knowledge-center/resources/2010/W-K-Kellogg-Foundation-Evaluation-Handbook.aspx)

2. Program Manager’s Planning, Monitoring and Evaluation Toolkit, Division for Oversight Services, Tool Number 5. (http://www.unfpa.org/monitoring/toolkit.htm)

3. Evaluation Models: Viewpoints on Educational and Human Services Evaluation, Second Edition, edited by Daniel L. Stufflebeam, George F. Madaus, and Thomas Kellaghan. (http://www.unssc.org/web/programmes/LS/unep-unssc-precourse-material/7_eVALUATIONl%20Models.pdf)

Rad Resources for Program Utilization

1. Utilization-Focused Evaluation, Michael Q. Patton, Fourth Edition, Sage Publications.

2. Independent Evaluation Group (IEG), The World Bank Group – improving development results through excellence in evaluation. (http://www.worldbank.org/oed/)

3. My M&E – a platform for sharing knowledge and practice among M&E practitioners worldwide. (www.mymande.org)

4. “Evaluate” – an evaluation center operated by Western Michigan University, specializing in National Science Foundation (NSF) evaluations. (www.evalu-ate.org)

5. United Kingdom Evaluation Society (UKES) – Resources/Evaluation Glossary. (http://www.evaluation.org.uk/resources/glossary.aspx)

Lessons Learned: Always take the initiative in searching out the data you need. In the information age, there is a plethora of evaluation services in operation all over the world. Data act as a gateway to the useful and significant research practices carried out in the evaluation profession. I see benchmarking as a natural outcome of consistent resource searching and utilization.

Hot Tip #1: How long can you stare at a Google search screen hoping it will meet your data needs? Expand your search across a multitude of web resources.

Hot Tip #2: Use networking to get quick responses to your queries; it adds a new dimension to your learning and practice. For example, on Facebook I created a separate page named “The Evaluation Library” for the books, references, and tools I use frequently in my evaluation work.

Hot Tip #3: Easy access to data sustains your interest in digging deeper. Stack or list all of your resources on a platform that you visit frequently.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
