AEA365 | A Tip-a-Day by and for Evaluators

TAG | retrospective pretest

Hello! We are Debi Lang and Judy Savageau from the Center for Health Policy and Research at UMass Medical School. Earlier this year, Debi published a post on how program-specific learning objectives can help measure student learning to demonstrate program impact. Today’s post shows how to measure whether training or professional development programs are meeting learning objectives using a retrospective pre-post methodology.

Start at the End!

A traditional pre-then-post approach to measuring student learning can suffer when students over- or underestimate their knowledge or ability on the pre-test, because we often “don’t know what we don’t know.” As a result, the difference between pre- and post-program data may not accurately reflect the program’s true impact.

Instead of collecting data at the beginning and end of the program, the retrospective pre-post approach measures students’ learning only at the end by asking them to self-assess what they know from two viewpoints – BEFORE and AFTER participating. The responses can be compared to show changes in knowledge/skills.

Below is an example of the retrospective pre-post design excerpted from the evaluation of a class on American Sign Language (ASL) interpreting in health care settings. Students are self-assessing their knowledge based on statements reflecting the learning objectives.

Hot Tips:

Here are some recommendations for designing a retrospective pre-post survey (as well as other training evaluation surveys):

  • Write a brief statement at the top of the form that explains the purpose of the evaluation, gives general instructions on when, how, and to whom to return completed forms, includes a confidentiality statement, and describes how responses will be used.
  • Include space at the end to ask for comments on what worked and suggestions for improvement.
  • Since many learners may not be familiar with the retrospective approach, use plain language so instructions are easily understood. This can be especially important for youth programs and when written or verbal instruction is not given in a student’s native language.

And Now for the Statistics…

Generally, a simple paired t-test is used to compare mean pre and post scores. However, when the sample is too small to assume the scores are normally distributed, the non-parametric equivalent of the paired t-test is typically computed instead. To analyze the data from the ASL class, with a sample size of 12, we used the Wilcoxon signed-rank test. Below are the average class scores for the 3 measures.
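To make the analysis concrete, here is a minimal sketch in Python using SciPy; the ratings are invented for illustration and are not the actual ASL class data.

    from scipy import stats

    # One retrospective "BEFORE" and one "AFTER" self-rating per student
    # (n = 12), e.g., on a 1-5 scale keyed to a learning objective.
    # These values are illustrative only.
    pre  = [2, 1, 2, 3, 2, 1, 2, 2, 3, 1, 2, 2]
    post = [4, 3, 4, 5, 3, 3, 4, 4, 5, 2, 4, 4]

    # With a larger, approximately normal sample, a paired t-test is typical:
    t_stat, t_p = stats.ttest_rel(pre, post)

    # With a small sample such as n = 12, the non-parametric Wilcoxon
    # signed-rank test is the usual alternative:
    w_stat, w_p = stats.wilcoxon(pre, post)

    print(f"Paired t-test: t = {t_stat:.2f}, p = {t_p:.3f}")
    print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.3f}")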

Lessons Learned:

Using a retrospective pre-post design allows for analysis of anonymous paired data, whereas the traditional pre-post approach requires linking the paired data to each student, which may compromise anonymity.

If follow-up data are collected (e.g., 6 months post-training) to measure whether knowledge is sustained, additional analytic testing would require a plan to merge the two data files by some type of ID number.
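For that merge step, a brief sketch (assuming pandas, with hypothetical file and column names) might look like this:

    import pandas as pd

    # Hypothetical files: one collected at the end of training, one at 6 months.
    # Both are assumed to contain a shared "student_id" column.
    post_df = pd.read_csv("post_training.csv")
    followup_df = pd.read_csv("followup_6mo.csv")

    # An inner join keeps only students present in both files, so the
    # follow-up analysis is based on complete pairs.
    merged = post_df.merge(followup_df, on="student_id", suffixes=("_post", "_6mo"))
    print(merged.shape)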

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! My name is Catherine Callow-Heusser, Ph.D., President of EndVision Research and Evaluation. I served as the evaluator of a 5-year Office of Special Education Programs (OSEP) funded personnel preparation grant. The project trained two cohorts of graduate students, each completing a 2-year Master’s level program. When the grant was funded, our first task was to comb the research literature and policy statements to identify the competencies needed for graduates of the program. By the time this was completed, the first cohort of graduate students had nearly completed their first semester of study.

As those students graduated and the next cohort was selected to begin the program, we administered a self-report measure of knowledge, skills, and dispositions based on the competencies. For the first cohort, this served as both a retrospective pretest and a posttest. For the second cohort, the assessment served as a pretest, and the same survey was administered as a posttest two years later as they graduated. The timeline is shown below.

[Figure: project timeline (callow-heusser-timeline)]

Retrospective pretest and pretest averages across competency categories were quite similar, as were posttest averages. Overall, the retrospective pretest (cohort 1) and traditional pretest (cohort 2) averages were 1.23 (standard deviation, sd = 0.40) and 1.35 (sd = 0.47), respectively. Item-level analysis indicated that the pretest item averages for the two cohorts were strongly and statistically significantly correlated (Pearson r = 0.79, p < 0.01), and that Hedges’ g for the difference between the cohort 1 and cohort 2 pretest averages was only 0.23, whereas Hedges’ g for the pre-to-posttest difference was 5.3 and 5.6 for the two cohorts, respectively.

[Figure: pretest and posttest averages (callow-heusser-chart)]
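For readers who want to reproduce the effect-size calculation, here is a minimal Python sketch of Hedges’ g (the standardized mean difference with a small-sample correction) for two independent groups, such as the cohort 1 retrospective pretest versus the cohort 2 pretest; the values shown are invented for illustration, not the project’s data.

    import numpy as np

    def hedges_g(x, y):
        """Standardized mean difference between two independent groups,
        with Hedges' small-sample correction."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        # Pooled standard deviation (sample variances, ddof=1)
        sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
        d = (x.mean() - y.mean()) / sp          # Cohen's d
        j = 1 - 3 / (4 * (nx + ny) - 9)         # small-sample correction factor
        return d * j

    # Illustrative item averages for two cohorts (not the actual project data)
    cohort1_retro_pre = [1.0, 1.5, 1.2, 0.8, 1.4, 1.3]
    cohort2_pre = [1.2, 1.6, 1.4, 1.0, 1.5, 1.4]
    print(f"Hedges' g = {hedges_g(cohort1_retro_pre, cohort2_pre):.2f}")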

Rad Resources: There are many publications that provide evidence supporting retrospective surveys, describe the pitfalls, and suggest ways to use them. Here are a few:

Hot Tip #1: Too often, we as evaluators wish we’d collected potentially important baseline data. This analysis shows that, for a self-report measure of knowledge and skills, a retrospective pretest provided results very similar to a pretest administered before learning when the two cohorts of students were compared. When appropriate, retrospective surveys can provide worthwhile outcome data.

Hot Tip #2: Evaluation plans often evolve over the course of a project. If potentially important baseline data were not collected, consider administering a retrospective survey or self-assessment of knowledge and skills, particularly when data from additional cohorts are available for comparison.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi!  This is Andrea Crews-Brown, Tom McKlin, and Brandi Villa with SageFox Consulting Group, a privately owned evaluation firm with offices in Amherst, MA and Atlanta, GA, and Shelly Engelman with the School District of Philadelphia. Today we’d like to share results of a recent survey analysis.

Lessons Learned: Retrospective vs. Traditional Surveying

Evaluators typically implement pre/post surveys to assess the impact a particular program had on its participants. Often, however, pre/post surveys are plagued by multiple challenges:

  1. Participants have little knowledge of the program content and thus leave many items blank.
  2. Participants complete the “pre” survey but do not submit a “post” survey; therefore, it cannot be used for comparison.
  3. Participants’ internal frames of reference change between the pre and post administrations of the survey due to the influence of the intervention. This is often called “response-shift bias.” Howard and colleagues (1979) consistently found that the intervention directly affects the self-report metric between the pre-intervention administration of the instrument and the post-intervention administration.

Retrospective surveys ask participants to compare their attitudes before the program to their attitudes at the end. The retrospective survey addresses most of the challenges that plague traditional pre/post surveys:

  1. Since the survey occurs after the course, participants are more likely to understand the survey items and, therefore, provide more accurate and consistent responses.
  2. Participants can reflect on their growth over time, giving them a more accurate view of their progression.
  3. Participants take the survey in one sitting, which means the responses are more likely to be paired.

Lesson Learned: Response Differences

To analyze response-shift bias, we compared "pre" responses on traditional pre/post items measuring confidence with "pre" responses on identical items administered retrospectively on a post survey. On the traditional pre survey, students reported a mean confidence of 4.47; on the retrospective survey, the mean was 3.86. In other words, students expressed significantly less confidence on the retrospective items. A Wilcoxon signed-rank test was used to evaluate the difference between the traditional-pre and retrospective-pre scores. A statistically significant difference (p < .01) was found, indicating that the course may have encouraged participants to recalibrate their perceptions of their own confidence.


Rad Resource:  Howard has written several great articles on response-shift bias!


 


Hi! This is Shelly Engelman and Brandi Campbell with The Findings Group, LLC, a private evaluation firm in Atlanta, GA.

Evaluators typically implement pre/post surveys to assess programmatic impact on participants.  However, pre/post surveys are plagued by challenges:

1. Participants have difficulty responding to the “pre” survey items because they have little knowledge of the program content and choose to leave many items blank.

2. Participants feel overburdened by the "post" survey because they already answered similar items on the "pre" survey, and so they do not fill it out.

3. A participant is not present for either the “pre” or “post” survey, resulting in an incomplete data set for that individual.

4. Participants gain insight into the program content and come to see it differently than they did at the beginning. This is known as response-shift bias: participants may overestimate their initial attitudes due to lack of knowledge at baseline, and after the program their deeper understanding affects their responses on the "post" survey.

Lesson Learned: Retrospective Results – Complete and Stable

Retrospective surveys ask participants to compare their attitudes before the program with their attitudes after it. Because a participant completes a retrospective survey in one sitting, responses are more complete. Not only is there a higher completion percentage with this method, but it has also been found to reduce response-shift bias in participants.

Lesson Learned: The Utility of Retrospective Results

In several of our projects, the retrospective survey had advantages over the pre/post survey: it yielded more complete datasets and higher response rates. On the other hand, because students complete the survey after the program, they may not accurately remember their attitudes before the program, especially if the program runs over several months. Additionally, younger participants may have trouble navigating the retrospective survey format and may require additional assistance.

Contribute to the Practice of Retrospective Surveying

We appreciate that the evaluation community has more to learn about appropriate uses for retrospective surveys. To more fully understand the differences in true pre/post vs. retrospective pre/post approaches, The Findings Group is conducting pre surveys followed by retrospective pre/post surveys on a handful of programs.  We expect to measure the differences, if any, between the two “pre” response sets.  We invite you to do the same and share your results.  We could put together a panel presentation at AEA 2014!

Hot Tips: Implementing a Retrospective Survey

It is simple to rewrite pre-post survey items for a retrospective survey.

Pre/post survey: I am confident in my ability to solve computer science problems.

Retrospective pre-survey: Before this workshop, I was confident in my ability to solve computer science problems.

Retrospective post-survey: After this workshop, I am confident in my ability to solve computer science problems.


