We’re Kelly Robertson and Lori Wingate from The Evaluation Center at Western Michigan University. We are part of EvaluATE, the evaluation hub for the National Science Foundation’s Advanced Technological Education (ATE) program.
Our recent research on evaluation (RoE) study involved collecting data from participants weekly and monthly over the course of a year, and we’re here to share some tips based on that experience. First, some background on the study: Our purpose was to develop and validate a set of essential evaluation tasks to guide EvaluATE’s capacity-strengthening work. To do this, we needed a detailed picture of what ATE evaluation entails. So we started with a task-tracking study: We asked seven ATE project evaluators and seven principal investigators (PIs) to record their evaluation-related tasks in an online form for one year. Evaluators recorded their tasks weekly; PIs did so monthly.
No one dropped out of the study, and all participants responded to all data requests (i.e., 52 entries for evaluators and 12 for PIs, plus a one-on-one interview).
Here are some of the strategies that contributed to the success of our long-term study:
- Compensate participants: We paid evaluators $2,000 for recording their evaluation tasks weekly for one year. We paid PIs $1,200 for recording their tasks monthly. We sent payments every three months, once participants had submitted all requested data for that period. We also offered to help participants with any evaluation-related challenges they faced, although no one took us up on this option.
- Communicate regularly with participants: We emailed reminders to participants before their task-tracking submissions were due (i.e., weekly for evaluators and monthly for PIs). The reminders were friendly—each week we included a different fun meme. We sent a follow-up email if participants did not submit their information on time, and we were always flexible if they needed more time to submit their information. We reviewed their submissions regularly and asked follow-up questions when entries were vague.
- Be clear about expectations: At the beginning of the study, we held an orientation webinar to inform participants about how to participate in the study and familiarize them with the online task-tracking form.
- Plan for participant drop-out: We safeguarded our data collection by recruiting more participants than we needed, assuming that some would drop out before the year was over. Our goal was for five evaluators and five PIs to provide a full year's worth of data; to achieve this, we recruited seven evaluators and seven PIs, allowing for two from each group to drop out.
We engaged a relatively small number of participants in our study, and some of these strategies may not be feasible with a larger group. We hope these tips – or your own version of them – help you sustain involvement among your research or evaluation participants.
The American Evaluation Association is hosting Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to AEA365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.