I am George Chitiyo, Professor of Educational Research and Evaluation at Tennessee Tech University. I work with teachers and other education professionals in assessing and evaluating instructional strategies, initiatives, and educational interventions and programs.
Today kicks off Teacher Appreciation Week for 2023, and on behalf of the PreK-12 Educational Evaluation TIG, I would like to acknowledge and express appreciation for the hard work that all our educators do, including assessment of learning (and assessment for learning). Without assessment, it would be difficult for educators to determine whether they are making a difference in their students’ learning, whether they are improving as facilitators of learning, and whether educational goals are being met at various levels.
For this particular blog post, I will highlight the need to look at educational programs from several angles in order to tell a more complete and compelling story from your assessment data. The lessons learned are based on work I have done with some of my colleagues, notably Dr. Lisa Zagumny, Dr. Ashley Akenson, Dr. David Larimore, and Mrs. Kinsey Simone.
Lesson Learned #1: Use multiple measures to assess learning outcomes. We all know how test scores are often hailed as the chief learning outcome in education. But wait! There has to be more. What about learners’ self-efficacy, motivation, self-confidence, higher-order thinking skills, interest in the learning process, and fascination with the instructional strategies (and hence engagement), among others? What about the ripple effects of the program? Whenever possible, it is advisable to assess multiple outcomes, as this will shed light on your program in ways you might not have considered, and you will be able to address the interests of other (often neglected) stakeholder constituents.
Lesson Learned #2: Educator observations are an important piece of data. Anecdotal data can help corroborate the more objective assessment data you might collect about your instruction or program. Educator observations are an often-ignored source of such data. Those observations (which I recommend documenting as “field notes”) often help glue together the pieces of the puzzle and thus explain why the program did or did not work.
Lesson Learned #3: The sugar pill effect is real. In their minds, the program works! In some of the programs we have evaluated, there were no statistically significant effects attributable to the intervention, yet the participants (students and teachers alike) were convinced the intervention worked. For example, middle school students expressed strong interest in learning through the flipped model of instruction, and their engagement was high, but there was no significant effect on achievement as measured by test scores. Similarly, the use of educational chess in school was not associated with improved grades across all grade levels, yet the students indicated they benefited academically. We shouldn’t ignore their experiences and viewpoints.
The American Evaluation Association is hosting PreK-12 Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. All contributions this week to AEA365 come from our PreK-12 Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.