Hi, I’m Juan D’Brot; I represent the National Council on Measurement in Education (NCME) on the Joint Committee on Standards for Educational Evaluation.
Do you remember when you first encountered evaluation as a discipline? I vividly recall my first post-graduate interview, where I was unexpectedly asked to differentiate between research and evaluation (spoiler: I didn’t respond as eloquently or accurately as I would’ve liked).
When thinking about research and evaluation, we often confront questions that conflate the two. How do we know an intervention works? What works well? What needs to be changed? What can be applied to other contexts?
You might be surprised (as I once was) to learn that researchers don’t typically tackle those questions. While research pursues generalizability and seeks to minimize bias, evaluation is more targeted and can address the how, who, what, when, where, and even the why, making it a crucial tool for understanding the practical implications of research. This is why understanding the distinctions between research and evaluation is so important.
Lessons Learned
The following table is my attempt to draw distinctions between research and evaluation, informed by many influential evaluators (see Patton; Rossi, Lipsey, & Freeman; Mertens; Fitzpatrick, Sanders, & Worthen; Scriven; and Shadish, Cook, & Leviton in Rad Resources).
| | Research | Evaluation |
|---|---|---|
| Purpose | Generates new knowledge, theories, or insights. | Assesses merit, worth, and effectiveness. |
| Focus | Hypothesis testing; exploratory (theory-building) or confirmatory (theory-testing). | Assesses programs or interventions; applied and context-specific. |
| Timeframe | Depends on the cross-sectional or longitudinal design of the research question(s). | Aligned with the duration or design of the program. |
| User Orientation | Can inform policy or practice, but often contributes knowledge to academic or scientific communities. | Users can include program managers, policymakers, funders, and those directly involved in or affected by the program. |
| Methods | Employs various methods, including experiments, surveys, case studies, interviews, and statistical analyses. Reliability and validity are prioritized. | Methods vary but often include mixed methods such as surveys, interviews, observations, focus groups, and document reviews to assess program processes, outcomes, and impacts. |
| Reporting and Use | Often published in academic journals, books, or conference proceedings to contribute to the knowledge of a given field. | Intended to be practical and actionable; findings are often communicated through reports, presentations, or recommendations. |
| Bias and Generalizability | Minimizes bias and increases the generalizability of findings through rigorous study designs and methods. | Acknowledges the potential for bias inherent in evaluating real-world programs and seeks to balance rigor with relevance to users’ specific needs and contexts. |
Rad Resources
- For a high-level list of approaches, check out Better Evaluation. Whether you’re a new or seasoned evaluator, you will likely come across some familiar types.
- Get to know Christie & Alkin’s (2008) evaluation theory tree. The trunk and branches represent distinct perspectives on, and approaches to, evaluation theory.
- For more on what makes an evaluator, delve into the American Evaluation Association (2018) Competencies.
- Program evaluation: Alternative approaches and practical guidelines by Jody Fitzpatrick, James Sanders, Blaine Worthen, and Lori Wingate
- Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods by Donna Mertens
- Utilization-Focused Evaluation by Michael Quinn Patton
- Evaluation: A systematic approach by Peter Rossi, Mark Lipsey, and Howard Freeman
- Evaluation thesaurus by Michael Scriven
- Foundations of program evaluation: Theories of practice by William Shadish, Thomas Cook, and Laura Leviton
This week, we’re diving into the Program Evaluation Standards. Articles will (re)introduce you to the Standards and the Joint Committee on Standards for Educational Evaluation (JCSEE), the organization responsible for developing, reviewing, and approving evaluation standards in North America.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.