Hello! I’m Kirk Knestis, CEO of Hezel Associates. The US Office for Human Research Protections defines “research” as any “systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” (Emphasis mine.) We often get wrapped up in narrower distinctions, like study populations, but I’m increasingly of the opinion that the clearest test of whether a study is “evaluation” or “research” is whether it is supposed to contribute to generalizable knowledge—in this context, knowledge about teaching and learning in science, technology, engineering, and math (STEM).
The National Science Foundation (NSF) frames this as “intellectual merit,” one of the two merit criteria against which proposals are judged: a project’s “potential to advance knowledge” in a program’s area of focus. The Common Guidelines for Education Research and Development expand on this, elaborating how each of their six types of R&D might contribute in terms of theoretical understanding of the innovation being studied and its intended outcomes for stakeholders.
For impact research (Efficacy, Effectiveness, and Scale-up studies), dissemination must include “reliable estimates of the intervention’s average impact” (p. 14 of the Guidelines), meaning findings from inferential tests of quantitative data. Dissemination might, however, address theories of action (relationships among variables, whether preliminary, evolving, or well specified) or an innovation’s “promise” of effectiveness later in development. This is, I argue, the most powerful aspect of the Common Guidelines typology: it elevates Foundational, Early Stage/Exploratory, and Design and Development studies to the status of legitimate “research.”
So, that guidance defines what might be disseminated. Questions remain, however, about who will be responsible for dissemination, when it will happen, and through what channels it will reach the desired audiences.
Lessons Learned:
The evaluation research partner will likely need to work with client institutions to support dissemination. Many grant proposals require dissemination plans, but those plans are typically the purview of the grantee, PI, or project manager rather than the “evaluator.” Those individuals may well need help describing study designs, methods, and findings in materials to be shared with external audiences, so think about how evaluation deliverables can serve that purpose (e.g., tailoring reports for researchers, practitioners, and/or policy-makers in addition to project managers and funders).
Don’t wait until a project is ending to worry about disseminating what has been learned. Project wrap-up is busy enough, and interim findings, along with information about methods, instruments, and emerging theories, can make substantive contributions to broader understanding of the topics a project addresses long before it closes out.
Rad Resource:
My talented colleague-competitor Tania Jarosewich (Censeo Group) put together an excellent set of recommendations for high-quality dissemination of evaluation research findings for a panel I shared with her at Evaluation 2014. I can’t do it justice here, so check out her slides from that presentation in the AEA eLibrary.
The American Evaluation Association is celebrating Research vs. Evaluation week. The contributions all this week to aea365 come from members whose work requires them to reconcile distinctions between research and evaluation, situated in the context of STEM teaching and learning innovations. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.