I’m Dan Zalles, Senior Educational Researcher at SRI International. Have you ever tried evaluating whether an innovative classroom intervention is improving student learning outcomes, only to find either that many teachers dropped out of the project or that learning gains failed to materialize?
It’s easy to conceptualize a centrally developed classroom innovation for students as a feasibly implementable effort, and to imagine that the teacher will merely be a faithful and devoted delivery vehicle. Unfortunately (or maybe fortunately), a substantial body of research literature points out that teachers are much more than that. You have to win their hearts and minds if they are to stick with the innovation. That requires distinguishing the innovation’s essentials from its “adaptables.” As principal investigator of NASA- and NSF-funded teacher professional development and classroom implementation projects, I’ve learned to be careful about differentiating the two (which is another way of saying “be careful how you pick your battles”).
Lesson Learned: In my two projects, STORE and DICCE, the core innovation is teacher use of certain geospatial scientific data sets. All else is adaptable. Early in the projects, I could see the value of this approach. I brought together science teachers from different schools who taught different grade levels and different courses. I showed them core lessons, developed by my central team, that illustrate uses of the data sets. Their first reaction was “That’s great, but this is what I would do differently.” Of course, they disagreed with each other. One teacher even disagreed with herself, saying that the adaptations she would need to make for her lower-level introductory biology class would have to be quite different from those for her AP biology class, which had a much more crowded curriculum. I was happy that I could respond by saying, “Your disagreements are fine. You don’t have to reach consensus, and you don’t have to implement these lessons as written. You can adapt them, or pick and choose from them, as long as you use at least some of the data.”
Hot Tip: If you’re an evaluator trying to determine effectiveness, you are of course interested in your ability to generalize across cases. Fortunately, you can still do that by rethinking your theory of change. Decide what the core innovation is and measure accordingly, looking at relationships between different teacher adaptation paths and student outcomes. Then think carefully about what characterizes feasibly measurable outcome metrics. For example, in the STORE project, all students answer pre-post open-ended questions about key concepts that the data sets illustrate. Because the assessments are open-ended, you can identify gains by scoring on broad constructs such as depth of thinking. Then, associate your findings with the various teacher adaptations and implementations.
The American Evaluation Association is celebrating Climate Education Evaluators week. The contributions all this week to aea365 come from members who work in a Tri-Agency Climate Education Evaluators group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.