PE Standards Week: Sharpening the Distinction between Research and Evaluation by Juan D’Brot

Hi, I’m Juan D’Brot; I represent the National Council on Measurement in Education (NCME) on the Joint Committee on Standards for Educational Evaluation.

Do you remember when you first encountered evaluation as a discipline? I vividly recall my first post-graduate interview, where I was unexpectedly asked to differentiate between research and evaluation (spoiler: I didn’t respond as eloquently or accurately as I would’ve liked).

When thinking about research and evaluation, we often confront questions that conflate the two. How do we know an intervention works? What works well? What needs to be changed? What can be applied to other contexts?

You might be surprised (as I once was) that researchers don’t typically tackle those questions. While research delves into generalizability and minimizes bias, evaluation is more targeted and can address the how, who, what, when, where, and even the why, making it a crucial tool for understanding the practical implications of research. This is why understanding the distinctions between research and evaluation is so important.

Lessons Learned

The following table is my attempt to draw distinctions between research and evaluation, informed by many influential evaluators (see Patton; Rossi, Lipsey, & Freeman; Mertens; Fitzpatrick, Sanders, & Worthen; Scriven; and Shadish, Cook, & Leviton in Rad Resources).

|  | Research | Evaluation |
| --- | --- | --- |
| Purpose | Generates new knowledge, theories, or insights. | Assesses merit, worth, and effectiveness. |
| Focus | Hypothesis testing; exploratory (theory-building) or confirmatory (theory-testing). | Assesses programs or interventions; applied and context-specific. |
| Timeframe | Depends on the cross-sectional or longitudinal design of the research question(s). | Aligned with the duration or design of the program. |
| User Orientation | Can inform policy or practice, but often contributes knowledge to academic or scientific communities. | Users can include program managers, policymakers, funders, and those directly involved in or affected by the program. |
| Methods | Employs various methods, including experiments, surveys, case studies, interviews, and statistical analyses. Reliability and validity are prioritized. | Methods vary but often include mixed methods such as surveys, interviews, observations, focus groups, and document reviews to assess program processes, outcomes, and impacts. |
| Reporting and Use | Often published in academic journals, books, or conference proceedings to contribute to the knowledge of a given field. | Intended to be practical and actionable; often communicated through reports, presentations, or recommendations. |
| Bias and Generalizability | Minimizes bias and increases the generalizability of findings through rigorous study designs and methods. | Acknowledges the potential for bias inherent in evaluating real-world programs and seeks to balance rigor with relevance to users' specific needs and contexts. |

Rad Resources


This week, we’re diving into the Program Evaluation Standards. Articles will (re)introduce you to the Standards and the Joint Committee on Standards for Educational Evaluation (JCSEE), the organization responsible for developing, reviewing, and approving evaluation standards in North America. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
