Greetings! My name is Juan D’Brot, Senior Associate at the Center for Assessment. I serve as the National Council on Measurement in Education (NCME) representative for the Joint Committee on Standards for Educational Evaluation and will be discussing the relationship among measurement, research, and evaluation.
My first job out of graduate school (Research and Evaluation Specialist) required someone who could navigate the interplay among measurement, research, and evaluation. During my interview, I (like an amateur) mistakenly used the terms research and evaluation interchangeably. While my work helped me crystallize the interconnected nature of the three, I wish I had understood it earlier in my career. One could think of their interaction like this:
Any of these three could serve your purposes, but you need to know ahead of time when and where to use the methods of each. Here are three main takeaways I’ve learned.
Lesson Learned 1: Be clear about your purposes and intended uses.
Consider the following (abridged) definitions:
- Measurement focuses on accumulating evidence to support specific interpretations of scores for an intended purpose or use
- Research tests theories to generalize findings and contributes to a larger body of knowledge
- Evaluation is a systematic method for collecting, analyzing, and using information to judge or improve a program
Depending on the purpose of your work, you may use some or all of these approaches. Beginning with the end in mind will help organize your efforts.
Lesson Learned 2: Borrow from other disciplines where appropriate.
It’s important to know how each approach can serve the other. Consider this quote:
Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted. –William Bruce Cameron (1963)
Recognize the value of using the literature to understand the context, intent, and interpretations of tools that are used in evaluations. Leverage other disciplines to help identify the context-dependent and context-independent conditions that inform good evaluations.
Lesson Learned 3: Extend your findings beyond your own project or discipline.
Generalizing beyond a specific program can be challenging. However, we should strive to identify context-dependent and context-independent conditions for future evaluators. Correspondingly, researchers should extend their findings to help interpret results in other programs or contexts.
I believe we can place a greater emphasis on the impact and applicability of our research in measurement. Generalization and contribution to knowledge are important, but we should do more to extend findings to the real world. The Program Evaluation Standards can be useful in helping us apply and communicate our research more broadly. By adopting an evaluative perspective, we can better use our research to determine the effectiveness of programs, models, or systems.
This week, we’re diving into the Program Evaluation Standards. Articles will (re)introduce you to the Standards and the Joint Committee on Standards for Educational Evaluation (JCSEE), the organization responsible for developing, reviewing, and approving evaluation standards in North America. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.