Research vs Eval Week: Kirk Knestis on Revisiting Innovation R&D vs. Program Evaluation: What do we know two years post-Common Guidelines for Education Research and Development?

Greetings, evaluation professionals! Kirk Knestis, CEO of Hezel Associates, back this time as guest curator of an AEA365 week revisiting the challenges of untangling purposes and methods between evaluation and the research and development (R&D) of education innovations. While this question is being worked out in other substantive areas as well, we deal with it almost exclusively in the context of federally funded science, technology, engineering, and math (STEM) learning projects, particularly those supported by the National Science Foundation (NSF).

In the two years since I shared some initial thoughts in this forum on distinctions between “research” and “evaluation,” the NSF has updated many of its solicitations to specifically reference the then-new Common Guidelines for Education Research and Development. This is, as I understand it, part of a concerted effort to increase emphasis on research—generating findings useful beyond the interests of internal project stakeholders. In response, proposals have been written and reviewed, and some have been funded. We have worked with dozens of clients, refined practices with guidance from our institutional review board (IRB), and even engaged external evaluators ourselves when serving in the role of “research partner” for clients developing education innovations. (That was weird!) While we certainly don’t have all of the answers in the complex and changing context of grant-funded STEM education projects, we think we’ve learned a few things that might be helpful to evaluators working in this area.

Lesson Learned: This evolution is going to take time, particularly given the number of stakeholder groups involved in NSF-funded projects—program officers, researchers, proposing “principal investigators” who are not researchers by training, external evaluators, and, perhaps most importantly, the panelists who score proposals on an ad hoc basis. While the increased emphasis on research is a laudable goal—consistent with the NSF merit review criterion of “Intellectual Merit”—these groups are far from consensus about terms, priorities, and appropriate study designs. On reflection, my personal enthusiasm and orthodoxy regarding the Guidelines put us far enough ahead of the implementation curve that we’ve often found ourselves struggling. The NSF education community is making progress toward higher quality research, but the potential for confusion and proposal disappointment is still very real.

Hot Tip: Read the five blogs that follow. They delve into the nuances of what my colleagues and I are collectively learning about how to improve our practices as operational distinctions evolve between R&D and external program evaluation of STEM education innovations. This week’s posts explore what we *think* we’re learning across three popular NSF education programs, in the context of IRB review of our studies, and with regard to the importance of dissemination. I hope they are useful.

The American Evaluation Association is celebrating Research vs Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
