We are Lori Wingate and Kelly Robertson of The Evaluation Center at Western Michigan University (WMU) and Michael FitzGerald and Lana Rucks of The Rucks Group (TRG).
We collaborated on a study of the impact of the evaluation capacity-strengthening efforts of EvaluATE, the evaluation hub for the National Science Foundation’s Advanced Technological Education program. In this post, we explain why we chose to examine evaluation plans in this research on evaluation study and share some tips for accessing NSF proposals spanning 14 years.
Background: The “Why”
Studies of evaluation capacity tend to rely on participants’ self-reports of changes in knowledge and practice. We wanted to find out about changes in practice over time, without relying on self-reported data. Here’s why we decided to examine evaluation plans as opposed to other artifacts of practice:
- Creating a good evaluation plan builds on several other dimensions of evaluation-related knowledge, skills, and attitudes. For example, to develop a sound evaluation plan, a project team has to understand the pros and cons of various data collection methods; be able to develop an evaluation budget; and value evaluation as worthy of time and effort, all potential indicators of overall evaluation capacity.
- Relative to other artifacts, evaluation plans are easy to compare over time and across projects. While a project may generate multiple reports that serve different purposes, each project has just one evaluation plan at the proposal stage. And, unlike reports and other artifacts, evaluation plans included in NSF funding proposals are similar in length, content, and purpose.
The sampling frame for our study included 877 proposals funded between 2004 and 2017. From this set, we randomly selected 280 (stratified based on the years in which they were funded). It took many months, but ultimately we obtained 169 proposals spanning more than a decade! Here’s what we learned:
- Be persistent: Members of the study team reached out to each selected project's representatives multiple times via written letters, emails, and phone calls.
- Make participation easy: Sometimes project representatives weren’t sure how to access their past proposals. We were ready with clear instructions on how to access their documents through NSF’s proposal submission system.
- Have a backup plan: When a project’s contact person declined to participate or could not be reached, we used random selection to replace their case with another.
- Use all options to access data: As a last resort, primarily for older proposals that we couldn’t otherwise access, we used Freedom of Information Act (FOIA) requests. Through FOIA, we obtained 34 more proposals to complete the sample. In the process, we learned a lot about FOIA’s perks and pitfalls.
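The stratified selection and backup-plan replacement steps described above can be sketched in code. This is an illustrative sketch only; the proposal IDs, strata, and helper names are our assumptions, not the study's actual data or procedure:

```python
import random
from collections import defaultdict

def stratified_sample(frame, target_total, seed=0):
    """Randomly select proposals from `frame`, stratified by funding year.

    `frame` is a list of (proposal_id, year) tuples; the number drawn
    per year is proportional to that year's share of the sampling frame.
    """
    rng = random.Random(seed)
    by_year = defaultdict(list)
    for pid, year in frame:
        by_year[year].append(pid)

    sample = []
    for year, ids in sorted(by_year.items()):
        # Allocate this stratum's share of the target sample size.
        k = round(target_total * len(ids) / len(frame))
        sample.extend((pid, year) for pid in rng.sample(ids, min(k, len(ids))))
    return sample

def replace_case(sample, frame, dropped, seed=0):
    """Replace a non-responding case with a randomly chosen proposal
    from the same stratum (year) that is not already in the sample."""
    rng = random.Random(seed)
    _, year = dropped
    in_sample = {pid for pid, _ in sample}
    pool = [(pid, y) for pid, y in frame
            if y == year and pid not in in_sample]
    return rng.choice(pool) if pool else None
```

Stratifying by funding year keeps each cohort's representation in the sample proportional to its size in the frame, and replacing a dropped case from within the same stratum preserves that balance.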
We share more about this research on evaluation (ROE) study here and in a forthcoming American Journal of Evaluation article. Watch the EvaluATE site for updates on our follow-up study, now in progress, using more recent proposals to gauge the impact of changes in NSF proposal guidelines.
The American Evaluation Association is hosting Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to AEA365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.