
Advocacy vs Objectivity in Evaluation by Yuan (Linda) Zhou

Hello everyone! I am Yuan (Linda) Zhou. In addition to my academic training in agricultural economics, I am currently a student in the Graduate Diploma in Public Policy and Program Evaluation at Carleton University (Ontario, Canada).

As I learn and practice evaluation through the program, I am building my understanding of research paradigms. I am especially interested in exploring my role as an evaluator in terms of subjectivity and objectivity. In particular, I am fascinated by finding the balance between earning the trust of the evaluand and stakeholders and the risk of slipping into an advocacy role during research design, data collection, and content analysis.

In the article titled Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science, Ponterotto argues that researchers must be aware of how their values can influence the process and outcomes of research. Of course, it is not possible for researchers to eliminate the influence of their values; instead, those values must be consciously treated as an integral part of research.

Rad Resources:

Guba and Lincoln, in Judging interpretations: But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation, discussed this topic in terms of whom to include in research, how to collect data, and what to do with the findings. Their later work focused on a transformative view of ethics grounded in critical theory.

Michael Scriven disagrees with Stufflebeam's view that evaluation's purpose is to inform decisions. He laid out his concept of evaluation in the book Evaluation Thesaurus and discussed goal-free evaluation in the article Prose and Cons about Goal-Free Evaluation.

Egon Guba and Yvonna Lincoln discussed various research paradigms in Competing paradigms in qualitative research, which offers a concise introduction to positivism, postpositivism, critical theory et al., and constructivism.

In addition, Mertens and Wilson's book Program Evaluation Theory and Practice: A Comprehensive Guide offers a more detailed discussion.

Rad Resource:

Donna Mertens on Transformative Research can be accessed at: https://www.youtube.com/watch?v=h5R9yqmbQKU

In the lecture, she quotes Marie Battiste: the point is not that we can raise up Indigenous people, but that society is sorely in need of what Aboriginal knowledge has to offer. I think this helps shift the focus from a deficit view (the need to help, the need to advocate) to what we gain by incorporating Indigenous perspectives and other forms of diversity.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

3 thoughts on “Advocacy vs Objectivity in Evaluation by Yuan (Linda) Zhou”

  1. Hi Linda,
    I’m currently a student working on my Professional Master of Education through Queen’s University. I’m just finishing up a course on evaluation and have been designing a program evaluation. In my design, I’ve come to realize that I have a particular bias about the program I am evaluating, so your comment that it is “not possible for researchers to eliminate the influence of their values; instead, those values must be consciously treated as an integral part of research” really resonated with me. How do I ensure that my bias doesn’t affect the questions I’m asking or, more importantly, my analysis of the responses I receive?

    I really appreciate the rad resource you shared by Donna Mertens. As educators in inclusive education, we are always looking for a student’s strengths: finding out what their gifts are and nurturing them so that students can achieve their fullest potential.

    1. Hi Kathy,

      As my team and I work on our evaluation project, we are still trying to be “less biased”.

      My team surveyed young kids (summer drone camp participants) to see whether the camp created conditions for engineering in terms of attitude, awareness, and capacity. We constantly ask ourselves whether we are “advocating” that the camp is better than other camps. I must admit that we had some moments when we believed some “anecdotal evidence.” I think the key is to incorporate that cautious thinking into the evaluation work.

      We are wrapping up the project soon, and I hope to share our experience here.

      Linda

  2. “I am fascinated by finding the balance between earning the trust of the evaluand and stakeholders and the risk of slipping into an advocacy role during research design, data collection, and content analysis.”

    Are you referring to advocating for (or against) some or all of the program or for/against an evaluation strategy or process?
