Ed Eval TIG Week: Catherine Callow-Heusser on Using Retrospective Assessments of Knowledge and Skills When Pretest Data Are Not Available

Hi! My name is Catherine Callow-Heusser, Ph.D., President of EndVision Research and Evaluation. I served as the evaluator of a 5-year personnel preparation grant funded by the Office of Special Education Programs (OSEP). The project trained two cohorts of graduate students, each completing a 2-year Master's-level program. When the grant was funded, our first task was to comb the research literature and policy statements to identify the competencies needed by graduates of the program. By the time this was completed, the first cohort of graduate students had nearly finished their first semester of study.

As those students graduated and the next cohort was selected to begin the program, we administered a self-report measure of knowledge, skills, and dispositions based on the competencies. For the first cohort, this served as both a retrospective pretest and a posttest. For the second cohort, it served as a pretest, and the same survey was administered as a posttest two years later when they graduated. The timeline is shown below.

[Figure: Timeline of pretest, retrospective pretest, and posttest administrations for the two cohorts]

Retrospective pretest and pretest averages across competency categories were quite similar, as were posttest averages. Overall pretest averages were 1.23 (standard deviation, sd = 0.40) and 1.35 (sd = 0.47) for cohorts 1 and 2, respectively. Item-level analysis indicated that the two cohorts' pretest item averages were strongly and statistically significantly correlated (Pearson's r = 0.79, p < 0.01), that Hedges' g for the difference between the cohort 1 and cohort 2 pretest averages was only 0.23, and that Hedges' g for the pre-to-posttest difference was 5.3 and 5.6 for the two cohorts, respectively.
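For readers who want to reproduce these kinds of statistics, here is a minimal Python sketch of the item-level Pearson correlation and the Hedges' g effect size. The item averages below are made up for illustration; they are not the project's data.

```python
import numpy as np
from scipy import stats

def hedges_g(x, y):
    """Hedges' g: Cohen's d scaled by a small-sample bias correction."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation across the two groups
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    # Correction factor J shrinks d slightly for small samples
    j = 1 - 3 / (4 * (nx + ny) - 9)
    return d * j

# Hypothetical item averages for each cohort's pretest (not real data)
cohort1_pre = np.array([0.9, 1.1, 1.4, 1.2, 1.5])
cohort2_pre = np.array([1.0, 1.3, 1.6, 1.3, 1.7])

r, p = stats.pearsonr(cohort1_pre, cohort2_pre)  # item-level correlation
g = hedges_g(cohort2_pre, cohort1_pre)           # cohort difference in sd units
print(f"Pearson's r = {r:.2f} (p = {p:.3f}), Hedges' g = {g:.2f}")
```

Hedges' g is used here rather than Cohen's d because the correction factor J keeps the effect size from being overstated with small cohorts.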

[Figure: Chart of average ratings by competency category for each cohort's pretest, retrospective pretest, and posttest]

Rad Resources: Many publications provide evidence supporting retrospective surveys, describe their pitfalls, and suggest ways to use them.

Hot Tip #1: Too often, we as evaluators wish we'd collected potentially important baseline data. In this project, a retrospective pretest on a self-report measure of knowledge and skills produced results very similar to a pretest administered before learning, based on a comparison across two cohorts of students. When appropriate, retrospective surveys can provide worthwhile outcome data.

Hot Tip #2: Evaluation plans often evolve over the course of a project. If potentially important baseline data were not collected, consider administering a retrospective survey or self-assessment of knowledge and skills, particularly when data from additional cohorts are available for comparison.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

2 thoughts on “Ed Eval TIG Week: Catherine Callow-Heusser on Using Retrospective Assessments of Knowledge and Skills When Pretest Data Are Not Available”

  1. Also, sometimes the retrospective post-then-pre test provides better data when response shift bias might be a factor. Response shift bias happens when respondents' understanding of a concept, or things like their level of trust in the person collecting the data, shifts over time.

    For example, students may think they have good listening skills before a soft skills course, so they rate themselves 4 out of 5. Then they learn that their listening skills are not as good as they thought and rate themselves 3 out of 5. A pre-post comparison would therefore make it look as if the course decreased their soft skills, which is not the case. The retrospective post-then-pre test provides more accurate data because it is conducted at a single point in time, when respondents' understanding of good listening skills is constant. (A numeric sketch of this pattern follows the comments below.)

  2. Shafiullah Rasikh

    Thanks for the informative article. Here is my question: if the same test, with the same questions, is administered to Cohort 1 at the same time as both the retrospective pretest and the posttest, the results should be the same for both. But here there is a significant difference (improvement) between the retrospective pretest and the posttest of cohort 1. For example, on "Assessment and evaluation," cohort 1's retrospective pretest average was 0.9 whereas its posttest average was 2.9, a large increase. Would you please clarify this further?

    Regards,

    Rasikh
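
To make the response shift bias example in the first comment concrete, here is a minimal numeric sketch. The ratings are hypothetical, chosen to echo the listening-skills example rather than drawn from any real course:

```python
# Hypothetical self-ratings on a 1-5 scale (illustrative only)
traditional_pre = 4    # rated before the course, with an inflated frame of reference
retrospective_pre = 2  # pre-course skill as re-rated after the course
post = 3               # rated after the course

# The traditional comparison suggests the course hurt (-1), while the
# retrospective comparison, made within a single frame of reference,
# shows the actual gain (+1).
print("Traditional pre-to-post change:   ", post - traditional_pre)
print("Retrospective pre-to-post change: ", post - retrospective_pre)
```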
