My name is Jennifer Hamilton, and I am the Vice President for Education and Child Development at NORC at the University of Chicago and Past President of the Eastern Evaluation Research Society (EERS), an AEA affiliate. I am also a methodologist who sometimes tweets about evaluation (@limeygrl).
Given the increasing necessity for evaluators to adapt to our changing world (the theme of the 2019 EERS Spring conference), I wanted to talk here about adapting our methodological toolkit to appropriately reflect the context of the program under study.
Right now, there are many changes occurring in the field of evaluation, including important efforts to make our findings more interpretable to our audiences (shout out to the good work of Stephanie Evergreen and Ann Emery). But in terms of methodology, I want to focus on being adaptive in the timing of evaluations. A few years ago, Jill Feldman and I wrote about the importance of matching the methodology of an evaluation to the life cycle of the program (Hamilton and Feldman, 2014). But there is an additional aspect of timing that we did not address: the “implementation dip” (Fullan, 2001). The implementation dip is literally a dip in performance and confidence as one encounters an innovation that requires new skills and new understandings. In education, you see this when teachers are asked to significantly change how they provide instruction.
When the implementation dip occurs during an impact evaluation, we often find no effect, or sometimes even a negative effect, on student outcomes. I’m going to call this the ‘Voldemort effect’ (thanks to Eric Hedberg, who originated the phrase): while we fear it, we also don’t like to talk about it. And it has killed many a promising program.
Lessons Learned:
The advice is therefore to openly acknowledge the Voldemort effect in the planning stage and adapt accordingly. The design can be adapted in a couple of ways. First, we can incorporate measures of implementation fidelity beginning on day one, rapidly cycling implementation data back to the teachers (an efficacy design). This information can encourage better fidelity, minimizing the implementation dip. Second, we can plan to collect the data for our primary confirmatory outcomes in year 2 (where possible), rather than hanging our hat on first-year findings. This may be more costly, and therefore more difficult to pitch, but it is better than null (or negative) findings. A simple illustration of why year 2 matters is sketched below.
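To make the timing point concrete, here is a purely hypothetical sketch in Python of how a dip can play out in the data. All of the numbers (sample sizes, the true program effect, and the size of the dip) are invented for illustration; nothing here comes from a real study. The idea is simply that when a temporary dip offsets a program’s true benefit, a confirmatory analysis based only on year 1 can look null or negative, while the same analysis in year 2, after implementation has matured, can recover the effect.

```python
# Hypothetical simulation of the "Voldemort effect": an implementation dip
# masking a program's true impact in year 1. All parameters are invented
# for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_per_arm = 200      # students per arm (assumed)
true_effect = 0.25   # long-run program effect in standard deviation units (assumed)
year1_dip = -0.30    # temporary loss while teachers learn the new practices (assumed)

# Control group outcomes, standardized to mean 0, SD 1.
control = rng.normal(0.0, 1.0, n_per_arm)

# Year 1: the dip offsets (and here outweighs) the program's benefit.
treat_year1 = rng.normal(true_effect + year1_dip, 1.0, n_per_arm)

# Year 2: implementation has matured, so the dip has largely resolved.
treat_year2 = rng.normal(true_effect, 1.0, n_per_arm)

for label, treat in [("Year 1", treat_year1), ("Year 2", treat_year2)]:
    estimate = treat.mean() - control.mean()
    t_stat, p_value = stats.ttest_ind(treat, control)
    print(f"{label}: estimated effect = {estimate:+.2f} SD, p = {p_value:.3f}")
```

In this toy setup, the year 1 contrast tends to hover around zero (or dip below it), while the year 2 contrast reflects the program’s actual effect, which is exactly the pattern that can kill a promising program if the evaluation only hangs its hat on first-year data.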
Rad Resource:
The implementation dip is discussed often in business management consulting, and there are a number of resources available that also make sense for education. A nice description of the issue, along with some solutions, is provided here.
Hot Tip:
Follow or live tweet the Spring conference using #EERS19 – it’s a good way to meet people!
The American Evaluation Association is celebrating Eastern Evaluation Research Society (EERS) Affiliate Week. The contributions all this week to aea365 come from EERS members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi, great post! I’m also interested in the Hamilton & Feldman (2014) work cited in your article. Would you be willing to point me to where I can find it? Thank you!
Jessica – DM me on LinkedIn with your e-mail address and I’ll forward you a copy of Hamilton and Feldman 2014.