My name is Jim Van Haneghan. I am a Professor in the Department of Professional Studies at the University of South Alabama and Past President of the Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE). CREATE is an organization focused on both educational assessment and educational program evaluation in the service of effective teaching and learning (createconference.org). Our group brings together practitioners, evaluators, and researchers for our annual conference (October 5-7, 2017, Virginia Beach, VA). One of our main concerns has been the consequential validity of educational policies, classroom assessment practices, organizational evaluation, and program evaluation evidence. This concern is especially important in today's dynamic environment, where policy changes can alter the potential impact of a program and shift the nature of evaluation activity. The recent change in administration and in the Department of Education may require educational evaluators to adapt their evaluations to potentially radical changes. Hence, my goal in this post is to provide some tips for navigating the educational evaluation landscape over the next few years.
Hot Tips: Navigating the Shifting Sands of Educational Policies and Practices
- Pay closer attention to contextual and system factors in evaluation work. Contextual analyses can call attention to potential issues that may cloud the interpretation of evaluation results. For example, when No Child Left Behind was implemented, a project I was evaluating, which focused on a cognitive approach to teaching elementary arithmetic, was fundamentally altered. Instead of focusing on the intended program, the trainers and coaches shifted their attention to the specifics of how to answer questions on standardized tests. The new policy redirected the project from its intended aims toward testing. This problem of "initiative clash" has shown up many times over my career as an evaluator.
- Be vigilant for unintended consequences of programs and policies. Programs and policies often have unintended consequences; some can be anticipated, whereas others cannot.
Rad Resource: Jonathan Morell’s book Evaluation in the Face of Uncertainty provides a number of heuristics that can help evaluators anticipate and design their evaluations to address unintended consequences.
- Revisit and refresh your knowledge of the Program Evaluation Standards.
In an era of "fake news" and disdain for data, evaluators need to ensure that stakeholder interests are considered, that the data are valid and reliable, that the evaluation has utility for making decisions about and improving the program, and that an honest accounting of program successes and failures has been included. The mentality that only "winning" and positive results should be shared makes it difficult to improve programs or weed out weaker ones.
Rad Resources: The Program Evaluation Standards and AEA’s Guiding Principles for Evaluators.
- Enhance efforts toward inclusion of stakeholders, particularly those from traditionally underserved groups. Methods and approaches that take into account the perspectives of less empowered groups can help support equity and social justice in the context of educational policies and programs.
The American Evaluation Association is celebrating Consortium for Research on Educational Assessment and Teaching Effectiveness (CREATE) week. The contributions all this week to aea365 come from members of CREATE. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.