My name is Lori Wingate and I am Director of Research at The Evaluation Center at Western Michigan University. I also lead EvaluATE, the evaluation resource center for the National Science Foundation’s Advanced Technological Education (ATE) program.
NSF established the ATE program in response to the Scientific and Advanced-Technology Act of 1992, which called for “a national advanced technician training program, utilizing the resources of the nation’s 2-year associate-degree-granting colleges.” ATE’s Congressional origin, its characterization as a training (not research) program, and its focus on 2-year colleges set it apart from other NSF programs. Research is not the driving force of the program—it existed for 10 years before inviting proposals for research.
Since 2003, Targeted Research on Technician Education has been one of several ATE program tracks. Anecdotally, I know the program has found it challenging to attract competitive research proposals. Common problems include university-based researchers treating 2-year colleges as “guinea pigs” on which to try out their ideas, and 2-year college faculty being short on research expertise.
While few of ATE’s ~250 projects are targeted research, all must be evaluated. NSF underscored the importance of evaluation when it began supporting the Evaluation Resource Center in 2008. Since 2010, the program has required that proposal budgets include funds for independent evaluators.
At the 2014 ATE PI conference, I moderated a session on ATE research and evaluation in which the Common Guidelines for Education Research and Development figured prominently. These guidelines were developed by NSF and the Institute of Education Sciences as a step toward “improving the quality, coherence, and pace of knowledge development in [STEM] education,” but some participants questioned their relevance to the ATE program. Recent evidence suggests more education about the guidelines is needed. While just 7 of 202 respondents to the 2016 survey of ATE PIs identified their projects as “targeted research,” 58 spent some of their budgets on research activities. Of those, almost half had either never heard of the Common Guidelines (21%) or had heard of but not read them (28%). I sense that PIs based at 2-year colleges may see the growing emphasis on research as a threat to the program’s historic focus on training technicians. They seem to have embraced evaluation, but may not be sold on research.
- The time is ripe for evaluators with strong research skills to collaborate with ATE PIs on research.
- Evaluation results (project-specific knowledge) may serve as a foundation for future research (generalizable knowledge), thus connecting evaluation to research.
- Learn about the Common Guidelines. For a shortcut, see EvaluATE’s checklist versions.
- Learn more about evaluation and research in the ATE context—see EvaluATE’s webinar on Evaluation and Research in the ATE Program.
The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.