GSNE TIG Week: Making a Distinction Between Research and Evaluation: A Dialogue by Ian Burke and Seema Mahato

This week, AEA365 is hosting GSNE Week with our colleagues in the Graduate Student and New Evaluators AEA Topical Interest Group. The contributions all this week to AEA365 come from our GSNE TIG members. We hope you enjoy! -Liz DiLuzio, Lead Curator


Hi, our names are Ian Burke and Seema Mahato. We are early-career evaluators with doctoral-level training in Statistics and Evaluation, respectively, and have had the opportunity to work and write together several times over the last few years. While developing an approach to literature review during evaluation planning, we found a need to come to a consensus about the definitions of “research” and “evaluation.” An excerpt of that conversation, based on our recollections, is below.

Ian: My academic training was in a program centered around education research, but my introduction to evaluation was as a form of applied research. Through my involvement with AEA, I came to understand that there is often a need to distinguish between the practice of “research” and “evaluation.” How did your experiences in your graduate program and your early-career work impact your understanding of the two types of practice?

Seema: Program evaluation was part of the coursework for my doctoral degree in research methods and evaluation. In addition, applied learning projects, professional service at AEA and OPEG, and pro bono consulting helped solidify my understanding of evaluation science. These experiential learning opportunities exposed me to the political nature of evaluations, especially the role of power that bears upon evaluation activities. Having designed both research and evaluation projects, I could see the nuanced differences between research and evaluation strategies. My first big lesson was that researchers get to choose their research questions while evaluators have to work with questions presented by the program staff or evaluation commissioners.

Ian: That makes a lot of sense! If we look at evaluation and research as two distinct approaches, then we can make distinctions between how different parts of the process will be used. If we are developing a plan for literature review in evaluation, how would the approach differ?

Seema: The approach differs in purpose: for research projects, the goal is to explicate the study’s significance, while for evaluation projects, the purpose is to facilitate evaluation planning.

Our full dialogue would not fit within an AEA365 blog, but we want to give readers some idea of our conversation. We settled on a consensus similar to that introduced by Vo and Alkin, focused on how the goals of research and evaluation differ. This led to further discussion of how to use skills often introduced as research skills in an evaluation context. We have summarized some of our thoughts on this topic in the Lessons Learned section.

Lessons Learned:

  • While there are many ways to frame the relationship between research and evaluation, we arrived at a consensus that fit our needs.
  • Understanding evaluation as having a focus on being usable by stakeholders – as opposed to researchers’ focus on building and testing theories – leads to different approaches to tasks such as literature review.
  • The situational nature of evaluations means that evaluations lead to analytic generalizations as opposed to generalization of findings from a research project.
  • Power, in terms of access to information, plays a major role in the success or failure of an evaluation. Information provided by program staff affects the evaluation approach. Thus, there is a practical as well as an ethical incentive for evaluators and program stakeholders to share a methodological vocabulary.
  • While the skills used in evaluation and academic research are similar, thinking about the needs of evaluation and research users leads to different applications of these skills.
  • Understanding these differences through dialogue (peer-to-peer conversations) and reflective practice can be helpful in designing effective methodological approaches.

Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
