Kirk Knestis on Innovation Research and Development (R&D) vs. Program Evaluation

Kirk Knestis here, CEO of Hezel Associates—a research and evaluation firm specializing in education innovations. Like many of you, I’ve participated in “evaluation versus research” conversations. That distinction is certainly interesting, but our work studying science, technology, engineering, and math (STEM) education leaves me more intrigued by what I call the “NSF Conundrum”: confusion among stakeholders (not least National Science Foundation [NSF] program officers) about the expected role of an “external evaluator” as described in a proposal or implemented in a funded project. This has been a consistent challenge in our practice and is increasingly common in other agencies’ programs (e.g., the US Departments of Education and Labor). The good news is that a solution may be at hand…

Lessons Learned – The most constructive distinction here is between (a) studying the innovation of interest, and (b) studying the implementation and impact of the activities required for that inquiry. For this conversation, call the former “research” (following NSF’s lead) and the latter “evaluation,” or more particularly “program evaluation,” to further elaborate the differences. Grantees funded by NSF (and increasingly by other agencies) are called “Principal Investigators,” with the presumption that they are doing some kind of research. The problem is that their research sometimes looks like, or gets labeled, “evaluation.”

Hot Tip – If it seems this is happening (purposes and terms are muddled), reframe planning conversations around the distinction described above: between “research,” or more accurately “research and development” (R&D), of the innovation of interest, and assessment of the quality and results of that R&D work (“evaluation” or “program evaluation”).

Hot Tip – When reframing planning conversations, take into consideration the new-for-2013 Common Guidelines for Education Research and Development developed by NSF and the US Department of Education’s Institute of Education Sciences (IES). The Guidelines delineate six distinct types of R&D, based on the maturity of the innovation being studied. More importantly, they clarify the “justifications for and evidence expected from each type of study.” Determine where in that conceptual framework the proposed research is situated.

Hot Tip – Bearing that in mind, explicate ALL necessary R&D and evaluation purposes associated with the project in question. Clarify questions to be answered, data requirements, data collection and analysis strategies, deliverables, and roles separately for each purpose. Define, budget, assign, and implement the R&D and the evaluation, noting that some data may support both. Finally, note that the evaluation of research activities poses interesting conceptual and methodological challenges, but that’s a different tip for a different day…

Rad Resources – The BetterEvaluation site features an excellent article framing the research-evaluation distinction: Ways of Framing the Difference between Research and Evaluation.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
