Kirk Knestis here, CEO of Hezel Associates and EERS Communication Chair. My colleagues are very good at researching education innovations and evaluating programs. We serve as RESEARCH PARTNER or EXTERNAL EVALUATOR for research and development (R&D) projects across a variety of programs, many funded by agencies like the National Science Foundation.
Lesson Learned – I emphasize the distinction between the above roles purposefully. The first studies an innovation being developed, to inform iterative design and assess its promise of impact; the second examines the implementation and results of those R&D efforts. Both require collection and analysis of data, but it’s easy to get them tangled up where they meet when planning a research project or writing a proposal. We’ve come to understand that the simplest way to keep things straight is often to work backward: first ask how best to evaluate the R&D activities, then design the research and development processes in a separate discussion. Frankly, we need more robust methods than the “panel review” evaluation approaches to which we’ve typically defaulted.
Hot Tip – If one party is implementing an R&D project consistent with the precepts of the Common Guidelines for Education Research and Development, and another is charged with program evaluation for that effort, make sure everyone involved is on the same page regarding the Purpose, Justification Guidelines, and Guidelines for Evidence to be Produced detailed in the Guidelines for the type of research being designed. These attributes define the quality of the R&D, so they should guide evaluation of its implementation and results.
Hot Tip – Equally, you may largely ignore the Guidelines for External Feedback Plans in that document. That list of possible structures for organizing evaluation activities offers little useful guidance beyond raising the possibility of peer review. Unfortunately, peer review applies in practice only to published reports of Efficacy, Effectiveness, and Scale-up Research (per the Guidelines), so it’s not an answer for many, perhaps most, R&D projects.
Rad Resource – At least, I hope it’s rad. Hezel Associates has developed two conceptual models for evaluating education R&D projects: one adapted from ideas shared by Means and Harris at the 2013 AERA conference, and a second created from scratch. The latter is tightly aligned with the purpose, justification, and evidence guidelines for Design and Development Research (Type #3), where most of our partners’ projects are situated. The document linked above is tailored to NSF Advanced Technological Education audiences, but the frameworks should be broadly applicable. If interested, take a look. If they’re not rad, or if you have ideas to make them more rad, please share them. Better yet, come to the 2015 EERS conference and we can talk in person.
The American Evaluation Association is celebrating Eastern Evaluation Research Society (EERS) Affiliate Week. The contributions all this week to aea365 come from EERS members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.