Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future Individuals Weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Greetings from the Research, Evaluation, and Measurement (REM) Center in the University of South Carolina’s (USC) College of Education (COE). We’re Bryanna Montpeirous and Rachel Garrison, a Research Scientist and a Graduate Research Assistant on the evaluation team for CarolinaTIP, a university-based induction program for novice teachers from the university’s educator preparation program. Recently, we presented at the AEA Conference on how we leveraged evaluation data to develop an implementation fidelity tool (IFT) that supports the program’s expansion. Today, we want to provide a brief overview of our process and share some key takeaways for evaluators developing tools to measure implementation.
As we approached our evaluation’s sixth year, we needed to measure the implementation fidelity of CarolinaTIP’s coaching model. This meant creating a tool that would define the model while allowing us to collect data without disrupting the program. To faithfully articulate the coaching model, we gathered extensive data through observation and conversations with the program team, then coded the data around key coaching concepts from past evaluation reports. We used the codes to construct the tool’s domains and indicators, drawing on our existing data to ensure the indicators clearly defined the coaching model.
After creating the tool, we reviewed it internally, revised it based on that feedback, and presented it to the CarolinaTIP team. This presentation marked the start of an iterative review process aimed at refining the tool: in each cycle, we reviewed the tool with the CarolinaTIP team, they used it, and we gathered feedback to make further improvements. This ongoing process kept the evaluators and the program team on the same page and ensured the tool was functional and that the data collected would be valuable for both programmatic and evaluative purposes. The cycles concluded when both teams agreed the tool was sufficiently refined and ready for piloting, which is now underway.
Here are our takeaways for evaluators who are considering implementation fidelity measurement as part of their evaluation plan.
- Assess readiness. Ensure the program is at an appropriate phase of development and that you have adequate data to support tool development.
- Don’t rush. Invest time to establish a strong foundation while staying true to your data.
- Collaborate. Engage with a team; developing a tool alone can be challenging.
- Stay focused. Remain cognizant of the tool’s purpose and intended use.
- Optimize purpose. Design the tool to serve evaluation purposes while remaining useful to the program team.
- Prepare to educate. Be ready to explain what implementation fidelity measurement is and why it matters; anticipate some resistance.
- Scaffold the tool. Build a structured framework that progresses toward the end product so the program team can use intermediate versions along the way.
- Plan for refinement. Engage in an iterative process with both your team and the program team, including exploration and pilot phases.
- Welcome feedback. Embrace feedback as a valuable source of growth and improvement, and recognize its importance in refining the implementation tool.
- Balance revisions. Recognize when to accept feedback/make revisions and when to stand firm on evaluative decisions.
We hope these insights spark ideas for your own evaluation work and encourage you to embrace the process with patience and collaboration. Feel free to reach out if you have questions or want to share your experiences!
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.