YFE TIG Week: Purple slushy data points: lessons learned from program evaluation in high school settings by Scott Mengebier

Hi, I’m Scott Mengebier and I work with an Evaluation team for a national foundation.

For the past year, I have been leading our program evaluation's research compliance work nationally and driving data collection regionally for our largest project: measuring the impact of a social-emotional learning intervention on high schoolers’ academic performance and behavior.

We entered high schools as a first-year program in a large Midwestern urban school district where students had experienced trauma, suffered from chronic stress, and were prone to mistrusting new, potentially temporary volunteers. In these dynamic applied settings, we set out to test students’ executive function using objective performance measures from the NIH Toolbox, administered on iPads to increase reliability and confidentiality.

Lessons Learned:

  1. Relationships are everything – cooperation follows trust, which can only be earned.

Start with the influencers: students often look to trusted adults (though not necessarily those in charge) for cues on how to react to our program and research.

We invested time and energy to befriend these key allies and engaged them to help promote our program’s value throughout the community. Their support greatly improved recruitment and retention throughout the testing period.

  2. Keep it short, keep it real – from the moment you start talking, you have 45 seconds.

We packaged essential information into quick, memorable nuggets that paired the intellectual benefits of participation (“get to know your flow”) with more practical rewards (“bring that form back, get some hot chips”).

  3. Come for the slushies, stay for the stories.

Food incentives can drive up initial interest and participation, but their appeal can become saturated and collapse over time.

Purple slushies were effective at drawing participants in during recruitment, but promises of pizza parties did not guarantee students would turn up for testing. This led to frustration (a.k.a. stress-eating pizza) and inefficient use of resources (well, almost), whereas long-term relationship building yielded higher retention in our evaluation.

Hot Tips:

  1. Get a copy of the class rosters: knowing (and remembering!) students’ names when you wish to recruit them helps build rapport and develop those key relationships.
  2. Use an ‘exit ticket:’ checking students’ understanding through rapid-fire questions will reinforce the essential takeaways, especially for those students who may not have been listening until you mentioned ‘hot chips.’
  3. Keep incentives small, but be consistent: small rewards like food can help build initial interest but they are not as important as delivering on any promise you make, especially for long-term engagement and continued attendance.

Capturing data is a thrilling and challenging part of the evaluation process, but with youth, each potential data point is built on trust and some creative thinking.

The American Evaluation Association is celebrating Youth Focused Evaluation (YFE) TIG Week with our colleagues in the YFE Topical Interest Group. The contributions all this week to aea365 come from our YFE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org.


  1. Reisha Williams

    Hi Scott,

    Thank you so much for sharing your experiences! As a high school teacher myself and a Master’s student learning about program inquiry and evaluation, I connected with your article on two different levels.

    I found your insights regarding what helps adults connect with students to be very accurate. Relationship-building is key, but I imagine that it is far more challenging as an ‘outsider’ who is there for a short amount of time, as opposed to a teacher who interacts with the students every day. Knowing students’ names is a great tip! I can imagine some of my students responding with a judgmental “How do you know my name?” quip, but secretly feeling flattered and intrigued. And in terms of food incentives, you hit the nail on the head. I find that students are extremely responsive and engaged when candy, for example, is brought out on occasion, but they lose interest, and it doesn’t inspire long-term participation.

    On a more academic note, we recently explored the topic of evaluation use in one of my Master’s courses, and I was especially interested in how relationship-building and stakeholder involvement play a role. Shulha and Cousins (1997) argue that evaluators require strong interpersonal skills to build relationships and reduce stakeholders’ skepticism about their presence. As program evaluation can often appear very logical and mechanical, I think it is important to reflect on the importance of communication and people skills as well.

    Thanks again for sharing your thoughts!

    Shulha, L., & Cousins, B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195–208.
