AEA365 | A Tip-a-Day by and for Evaluators


Hello! We are Debi Lang and Judy Savageau from the Center for Health Policy and Research at UMass Medical School. Earlier this year, Debi published a post on how program-specific learning objectives can help measure student learning to demonstrate program impact. Today’s post shows how to measure whether training or professional development programs are meeting learning objectives using a retrospective pre-post methodology.

Start at the End!

A traditional pre-then-post approach to measuring student learning can suffer when students over- or underestimate their knowledge or ability on the pre-test, because we often “don’t know what we don’t know.” As a result, the difference between pre- and post-program data may inaccurately reflect the program’s true impact.

Instead of collecting data at the beginning and end of the program, the retrospective pre-post approach measures students’ learning only at the end by asking them to self-assess what they know from two viewpoints – BEFORE and AFTER participating. The responses can be compared to show changes in knowledge/skills.

Below is an example of the retrospective pre-post design excerpted from the evaluation of a class on American Sign Language (ASL) interpreting in health care settings. Students are self-assessing their knowledge based on statements reflecting the learning objectives.

Hot Tips:

Here are some recommendations for designing a retrospective pre-post survey (as well as other training evaluation surveys):

  • Open the form with a brief statement of the evaluation’s purpose, along with general instructions on when, how, and to whom to return completed forms, a confidentiality statement, and a note on how responses will be used.
  • Include space at the end to ask for comments on what worked and suggestions for improvement.
  • Since many learners may not be familiar with the retrospective approach, use plain language so instructions are easily understood. This can be especially important for youth programs and when written or verbal instruction is not given in a student’s native language.

And Now for the Statistics…

Generally, a simple paired t-test is used to compare mean pre and post scores. However, when the sample is small or the scores are not normally distributed, the non-parametric equivalent of the paired t-test is the better choice. To analyze the data from the ASL class, which had a sample size of 12, we used the Wilcoxon signed-rank test. Below are the average class scores for the 3 measures.
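As a sketch of how this comparison can be run, here is the Wilcoxon signed-rank test in Python using SciPy. The 5-point Likert ratings below are made-up values for 12 students, not the actual ASL class data:

```python
# Sketch: comparing retrospective "before" and "after" self-ratings
# with a Wilcoxon signed-rank test. Ratings are hypothetical 5-point
# Likert scores for 12 students.
from scipy.stats import wilcoxon

before = [2, 1, 3, 2, 2, 1, 3, 2, 1, 2, 3, 2]
after  = [4, 3, 4, 4, 5, 3, 4, 4, 3, 4, 5, 4]

# wilcoxon() tests whether the median of the paired differences is zero;
# a small p-value suggests a real shift in self-assessed knowledge.
stat, p = wilcoxon(before, after)
print(f"W = {stat}, p = {p:.4f}")
```

Because every student in this made-up data set rated themselves higher after the class, the test statistic is 0 and the p-value is very small.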

Lessons Learned:

Using a retrospective pre-post design allows for analysis of anonymous paired data, whereas the traditional pre-post approach requires linking the paired data to each student, which may compromise anonymity.

If follow-up data are collected (e.g., 6 months post-training) to measure sustainability of knowledge, additional analytic testing would require a plan to merge the two data files by some type of ID number.
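If the two files were kept as tables, such a merge could be sketched with pandas. The column names (`student_id`, `post_score`, `followup_score`) and the values are assumptions for illustration, not the actual files:

```python
# Sketch: merging a post-training file with a 6-month follow-up file
# on a hypothetical ID column. All names and values are made up.
import pandas as pd

post = pd.DataFrame({"student_id": [101, 102, 103],
                     "post_score": [4, 5, 3]})
followup = pd.DataFrame({"student_id": [101, 103],
                         "followup_score": [4, 2]})

# An inner merge keeps only students who completed both surveys,
# giving the paired data needed for further testing.
merged = post.merge(followup, on="student_id", how="inner")
print(merged)
```

Note that an inner merge silently drops students who skipped the follow-up, so it is worth reporting how many pairs survive the merge.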


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
