Hi! My name is Catherine Callow-Heusser, Ph.D., President of EndVision Research and Evaluation. I served as the evaluator of a 5-year Office of Special Education Programs (OSEP) funded personnel preparation grant. The project trained two cohorts of graduate students, each completing a 2-year Master’s level program. When the grant was funded, our first task was to comb the research literature and policy statements to identify the competencies needed for graduates of the program. By the time this was completed, the first cohort of graduate students had nearly completed their first semester of study.
As those students graduated and the next cohort was selected to begin the program, we administered a self-report measure of knowledge, skills, and dispositions based on the competencies. For the first cohort, this served as a retrospective pretest as well as a posttest. For the second cohort, this assessment served as a pretest, and the same survey was administered as a posttest two years later as they graduated. The timeline is shown below.
Retrospective pretest and pretest averages across competency categories were quite similar, as were posttest averages. Overall pretest averages were 1.23 (standard deviation, sd = 0.40) and 1.35 (sd = 0.47) for cohorts 1 and 2, respectively. Item-level analysis indicated that pretest item averages for the two cohorts were strongly and statistically significantly correlated (Pearson r = 0.79, p < 0.01), and that the Hedges' g measure of the difference between pretest averages for cohorts 1 and 2 was only 0.23, whereas the Hedges' g measure of the difference from pretest to posttest was 5.3 and 5.6 for the two cohorts, respectively.
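For readers who want to reproduce this kind of analysis, the two statistics reported above (Pearson r between item averages, and Hedges' g as a standardized mean difference with a small-sample correction) can be computed from first principles. This is a minimal sketch in plain Python; the item-average values below are illustrative placeholders, not the project's actual data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def hedges_g(group1, group2):
    """Hedges' g: Cohen's d with the small-sample correction factor J."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((a - m1) ** 2 for a in group1) / (n1 - 1)
    v2 = sum((b - m2) ** 2 for b in group2) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m2 - m1) / s_pooled          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample correction
    return j * d

# Hypothetical item averages (not the project's actual values)
cohort1_pre = [1.1, 1.4, 1.0, 1.6, 1.2]
cohort2_pre = [1.3, 1.5, 1.1, 1.7, 1.4]

print("Pearson r:", round(pearson_r(cohort1_pre, cohort2_pre), 2))
print("Hedges' g:", round(hedges_g(cohort1_pre, cohort2_pre), 2))
```

With real data, `cohort1_pre` and `cohort2_pre` would hold the item-level pretest averages for each cohort; the same `hedges_g` function applied to pretest versus posttest scores yields the pre-to-posttest effect sizes.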
Rad Resources: There are many publications that provide evidence supporting retrospective surveys, describe the pitfalls, and suggest ways to use them. Here are a few:
- Other AEA365 posts on this topic
- The Evaluation Exchange, a Periodical on the Emerging Strategies of Evaluation, Harvard Family Research Project, Harvard Graduate School of Education
- The Retrospective Pretest: An Imperfect but Useful Tool, by Theodore Lamb (2005)
- Synthesis of the Literature Relative to the Retrospective Pretest Design (Klatt and Taylor-Powell, 2005)
- What’s the Difference? Post then Pre or Pre then Post (Colosi and Dunifon, 2006)
Hot Tip #1: Too often, we as evaluators wish we’d collected potentially important baseline data. This analysis shows that, for a self-report measure of knowledge and skills, a retrospective pretest produced results very similar to a traditional pretest administered before instruction when the two cohorts of students were compared. When appropriate, retrospective surveys can provide worthwhile outcome data.
Hot Tip #2: Evaluation plans often evolve over the course of a project. If potentially important baseline data were not collected, consider administering a retrospective survey or self-assessment of knowledge and skills, particularly when data from additional cohorts are available for comparison.
The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.