Kirk Knestis, CEO of Hezel Associates, back again, following up on a previous post about how evaluators’ work in STEM education settings is being influenced by the Common Guidelines for Education Research and Development introduced by the National Science Foundation (NSF) and U.S. Department of Education. Hezel Associates studies education innovations, so we regularly support organizations proposing grant-funded R&D projects in science, technology, engineering, and math (STEM) education. Sometimes we serve as a research partner (typically providing Design and Development Research, Type #3 in the Guidelines); in other cases we serve as an external evaluator (more accurately, “program evaluator”), assessing the implementation and impact of proposed project activities, including the research.
Lessons Learned – Work with a wide variety of clients (more than 70 proposals so far in 2014!) has convinced me that an evaluator (or research partner, if your job is framed that way) can do a few specific things that add substantial value to the development of a client’s proposal. Someone in an external evaluator/researcher role can do more than simply “write the evaluation section,” potentially improving the likelihood of proposal success.
Hot Tips – 1. Help designers explicate the theory of action of their innovation (intervention, program, technology, etc.) being tested and developed. Any research study aligned with the Guidelines (for example, many if not most NSF projects) will be expected to build on a clearly defined theoretical basis. Evaluators ought to be well equipped to facilitate development of a logic model to serve that purpose, illustrating connections between elements or features of the innovation and its intended outcomes.
2. Define the appropriate “type” of research. The Common Guidelines provide a typology of six types of research, ranging from Foundational Research, which contributes to basic understanding of teaching and learning, to Scale-up Research, which examines whether an innovation retains its effectiveness for a variety of stakeholders when implemented in different settings “out in the wild,” without substantial developer support. A skilled evaluator can help the client select the appropriate kind of research given the maturity of the innovation and other factors.
3. Help clarify distinctions between “research” and “evaluation” purposes, roles, and functions. Clarity about the type of research required will inform study design, data collection, analysis, and reporting decisions. A good evaluator should be able to help determine the expertise required for the research, the requirements for external evaluation of that work, and the narrative explaining the roles, responsibilities, and work plans required for a proposal.
Rad Resource – If you work with education clients, become familiar with the Common Guidelines for Education Research and Development. Some complex conversations loom, but the Guidelines will be an important consideration in discussions of research and evaluation in education in the coming years.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.