Climate Ed Eval Week: Dan Zalles on Maintaining Flexibility in Evaluating Outcomes Across Varying Implementations

I’m Dan Zalles, Senior Educational Researcher at SRI International. Have you ever tried evaluating whether an innovative classroom intervention is leading to greater student learning outcomes, and found either that many teachers dropped out of the project or learning gains failed to materialize?

It’s easy to conceptualize a centrally developed classroom innovation for students as a feasibly implementable effort and to imagine that the teacher will merely be a faithful and devoted delivery vehicle. Unfortunately (or maybe fortunately), there is much research literature pointing out that teachers are much more than that. You have to win their hearts and minds if they are to stick with the innovation. That requires thinking about the innovation’s essentials as opposed to its “adaptables.” As principal investigator of NASA- and NSF-funded teacher professional development and classroom implementation projects, I’ve learned to be careful about differentiating the two (which is another way of saying “be careful how you pick your battles”).

Lesson Learned: In my two projects, STORE and DICCE, the core innovation is teacher use of certain geospatial scientific data sets. All else is adaptable. Early in the projects, I could see the value of this approach. I brought together science teachers from different schools, teaching different grade levels and different courses. I showed them core lessons, developed by my central team, that illustrate uses of the data sets. Their first reaction was “That’s great, but this is what I would do differently.” Of course, they would disagree with each other. One teacher even disagreed with herself, saying that the adaptations she would need to make for her lower-level introductory biology class would have to be quite different from those for her AP biology class, which had a much more crowded curriculum. I was happy to respond by saying, “Your disagreements are fine. You don’t have to reach consensus, and you don’t have to implement these lessons as written. You can adapt them, or pick and choose from them, as long as you use at least some of the data.”

Hot Tip: If you’re an evaluator trying to determine effectiveness, you are of course interested in your ability to generalize across cases. Fortunately, you can still do that by rethinking your theory of change. Decide what the core innovation is and measure accordingly, looking at relationships between different teacher adaptation paths and student outcomes. Then think carefully about what characterizes feasibly measurable outcome metrics. For example, in the STORE project, all students are asked pre-post open-ended questions about key concepts that the data sets illustrate. Because the assessments are open-ended, you can identify gains by scoring on broad constructs such as depth of thinking. Then associate your findings with the various adaptations and teacher implementations.
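The gain-score analysis described above can be sketched in a few lines of code: score each student pre and post on a rubric for the broad construct, compute the gain, and group gains by the teacher’s adaptation path. Everything here (the adaptation-path labels, the 0–4 rubric scale, and the scores themselves) is purely illustrative, not data from STORE or DICCE.

```python
# Minimal sketch: mean pre-post gains grouped by adaptation path.
# All paths, scales, and scores below are hypothetical examples.
from statistics import mean

# Each record: (adaptation_path, pre_score, post_score) on a 0-4 rubric
# for a broad construct such as depth of thinking.
records = [
    ("full_lessons",    1.0, 2.5),
    ("full_lessons",    1.5, 3.0),
    ("data_only",       1.0, 2.0),
    ("data_only",       2.0, 2.5),
    ("pick_and_choose", 1.5, 2.0),
]

def mean_gain_by_path(records):
    """Average post-minus-pre gain for each adaptation path."""
    gains = {}
    for path, pre, post in records:
        gains.setdefault(path, []).append(post - pre)
    return {path: mean(g) for path, g in gains.items()}

print(mean_gain_by_path(records))
# e.g. {'full_lessons': 1.5, 'data_only': 0.75, 'pick_and_choose': 0.5}
```

With real data you would of course also want class-level baselines and some control for grade level before comparing paths, but the grouping logic is the same.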

The American Evaluation Association is celebrating Climate Education Evaluators Week. The contributions all this week to aea365 come from members who work in a Tri-Agency Climate Education Evaluators group. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

1 thought on “Climate Ed Eval Week: Dan Zalles on Maintaining Flexibility in Evaluating Outcomes Across Varying Implementations”

  1. Dear Mr. Zalles,

Thank you for highlighting an excellent point regarding evaluation, particularly when it comes to evaluating how innovations in teaching are implemented in diverse classrooms. I can relate to your post on a variety of levels. I am a high school Biology teacher, so I understand the situation perfectly from the teacher’s perspective. Each of my classes has unique needs depending on the students within them and the course content. If I were not able to adapt resources, I would feel stifled in my ability to do my job. At the moment, I am also pursuing my Master of Education, and one of my courses focuses on Program Evaluation. Therefore, I can also appreciate the significant problems you would encounter trying to evaluate the effectiveness of an innovation that is implemented differently in each case.

I think you came up with an excellent solution by focusing on a core innovation and measuring a broadly defined output. However, it sounds like you might not be able to determine what the evaluation would really be measuring until you received the data from classes. Is that a fair assessment? If the teachers used similar approaches, you could make more generalizations. If they used different approaches, as is likely, would the focus of the evaluation switch to examining which approach led to greater student understanding? It seems to me that you would be evaluating two different things at once: how the project contributed to learning in a broad sense, and which approaches allowed students to better utilize or understand the project.

As someone who is a beginner in the field of evaluation, I imagine this would be extremely challenging data to analyze. I have a few questions I was hoping you could address. How did you control for the differences between classes of students? In your example you discussed how you could use a broad construct such as “depth of thinking” with open-ended questions. A senior-level Science student is likely to have a deeper response than a grade 9 student. Did you control for this by gathering other data about the classes to determine their individual baselines? I am still quite new to evaluation, but as a Science teacher I do understand the need for controlled experimentation.

I am also very curious about the different approaches that teachers used to incorporate real data in their classes. Another course I am currently taking focuses on “Innovation in Teaching and Learning,” and I am working on how to incorporate more real-world connections into my Biology classes. The projects you describe sound like an excellent opportunity to do this! Are the projects still in operation? Do you consult with classrooms outside of the US? I am currently in Canada, but often teach internationally.

    Thank you so much for your post. I found it quite fascinating from multiple perspectives!
