AEA365 | A Tip-a-Day by and for Evaluators


My name is Di Cross from Clarivate Analytics. We conduct evaluations of scientific research funded by government agencies, non-profits, academic institutions or industry.

I cringe when I hear mention of ‘unbiased analysis’. What an oversimplification to state that an analysis (or evaluation) is unbiased! Everyone carries their own biases. Some exist as part of our brain’s internal wiring to enable us to go about our day without being paralyzed by the tremendous amount of information that our sensory systems constantly receive.

But what specifically do I mean by bias?

In statistics, the bias of an estimator is the difference between the expected value of the estimator and the population parameter it is intended to estimate. For example, the arithmetic average of a random sample taken from a normal distribution is an unbiased estimator of the population average. As even Wikipedia points out, ‘bias’ in statistics does not carry the same negative connotation it has in common English. However, that holds only in the absence of systematic errors.

Systematic errors are more akin to the common English definition of bias: ‘a bent or tendency’, ‘an inclination of temperament or outlook; especially…a personal and sometimes unreasoned judgment, prejudice; an instance of such prejudice.’
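To make the distinction concrete, here is a minimal simulation sketch (in Python, with entirely made-up numbers) of both ideas at once: the sample mean is statistically unbiased, yet a systematic error – say, an instrument that reads five units high – shifts it away from the truth no matter how many samples we average.

```python
import random

random.seed(1)

TRUE_MEAN = 100.0       # hypothetical population mean, chosen for illustration
N, TRIALS = 30, 10_000  # sample size, and number of repeated samples

def sample_mean(shift=0.0):
    """Mean of one random sample; `shift` models a systematic error."""
    return sum(random.gauss(TRUE_MEAN, 15) + shift for _ in range(N)) / N

# Statistical bias is E[estimator] minus the true value; we approximate
# the expectation by averaging the estimator over many repeated samples.
clean = sum(sample_mean() for _ in range(TRIALS)) / TRIALS
skewed = sum(sample_mean(shift=5.0) for _ in range(TRIALS)) / TRIALS

print(f"no systematic error:       bias ≈ {clean - TRUE_MEAN:+.3f}")   # ≈ 0
print(f"instrument reads +5 units: bias ≈ {skewed - TRUE_MEAN:+.3f}")  # ≈ +5
```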

So what do we do?

Hot Tip #1: Don’t panic!

Do not fool yourself into thinking that you can design and conduct evaluations that are 100% free of bias. Accept that there will be bias in some element of your evaluation. But of course, do your best to minimize bias where you can.

Hot Tip #2: Develop a vocabulary about bias

There are many sources of bias. Students in epidemiology, the discipline from which I approach evaluation, study selection bias, measurement error (including differential and non-differential misclassification), confounding, and generalizability. There are also discussions of bias specific to evaluation.
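One entry from that vocabulary lends itself to a quick demonstration: non-differential misclassification – recording exposure incorrectly at the same rate in both groups – is known to bias a risk ratio toward the null. Below is a rough Python sketch; the cohort size, risks, and error rate are all hypothetical, picked only to show the effect.

```python
import random

random.seed(2)

# Hypothetical cohort: exposure doubles the risk of the outcome.
N = 200_000
P_EXPOSED, RISK_UNEXP, RISK_EXP = 0.5, 0.10, 0.20
FLIP = 0.20  # 20% chance exposure is recorded wrongly, same in both groups

counts = {(e, o): 0 for e in (0, 1) for o in (0, 1)}  # keyed by *recorded* exposure
for _ in range(N):
    exposed = random.random() < P_EXPOSED
    outcome = random.random() < (RISK_EXP if exposed else RISK_UNEXP)
    recorded = (not exposed) if random.random() < FLIP else exposed
    counts[(int(recorded), int(outcome))] += 1

def risk(e):
    """Observed risk of the outcome among those *recorded* as exposure e."""
    return counts[(e, 1)] / (counts[(e, 0)] + counts[(e, 1)])

print(f"true risk ratio:      {RISK_EXP / RISK_UNEXP:.2f}")
print(f"observed risk ratio:  {risk(1) / risk(0):.2f}")  # attenuated toward 1
```

Even though the recording errors are ‘fair’ to both groups, the observed risk ratio lands well below the true value of 2.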

Hot Tip #3: Adjust your design where possible

After identifying potential sources of bias in your study design, address them as early in your evaluation as possible – preferably during the design phase. Alternatively, addressing bias might also mean performing analysis differently, or skipping to Hot Tip #4.

(Note: There is something to be said for accepting a biased estimator – or, dare I say, a biased study design – over one that is unbiased. This might be because the unbiased estimator is vastly more expensive than a biased estimator that isn’t too far off the mark. Or it might be for reasons of risk: Wouldn’t you rather consistently underestimate the time it takes to bake a batch of cookies than be right on average but risk having to throw away a charred batch half of the time?)
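A quick sketch of that asymmetric-risk argument, again with hypothetical numbers: suppose cookies char after 12 minutes, and your guess at the baking time scatters around whatever you estimate. A guess that is unbiased on average chars the batch about half the time; a guess biased low almost never does (and an underbaked batch can always go back in the oven).

```python
import random

random.seed(3)

TRUE_TIME = 12.0   # hypothetical: cookies char after 12 minutes
TRIALS = 100_000

def burn_rate(mean, sd=1.5):
    """Fraction of batches charred when the timer is drawn from N(mean, sd)."""
    return sum(random.gauss(mean, sd) > TRUE_TIME for _ in range(TRIALS)) / TRIALS

# Unbiased estimator: right on average, but chars the batch half the time.
# Biased-low estimator: systematically underbakes, but rarely chars anything.
print(f"unbiased (mean 12):   {burn_rate(12.0):.1%} of batches charred")
print(f"biased low (mean 10): {burn_rate(10.0):.1%} of batches charred")
```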

Hot Tip #4: Be transparent

Where it is not possible to address bias, describe it and acknowledge that it exists. Take it into consideration in your interpretation. As a prior AEA blog writer put it, ‘out’ yourself. Be forthcoming about sources of bias and communicate their effect on your evaluation to your audience.

The American Evaluation Association is celebrating Research, Technology and Development (RTD) TIG Week with our colleagues in the Research, Technology and Development Topical Interest Group. The contributions to aea365 all this week come from our RTD TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is part of a series remembering and honoring evaluation pioneers leading up to Memorial Day in the USA on May 30.

My name is Mel Mark; I am a former AEA President and a former editor of the American Journal of Evaluation. Don Campbell used pithy phrases to communicate complex philosophical or methodological issues. My favorite was: “Cousin to the amoeba, how can we know for certain?” This encapsulates his philosophy of science, which informed his contributions to evaluation.

Pioneering and enduring contributions:

Campbell’s pioneering contributions included work on bias in social perception, intergroup stereotyping, visual illusions, measurement, research design and validity, and evaluation, which was at the center of his vision of “an experimenting society.” He believed in the evolution of knowledge through learning: “In science we are like sailors who must repair a rotting ship while it is afloat at sea. We depend on the relative soundness of all other planks while we replace a particularly weak one. Each of the planks we now depend on we will in turn have to replace. No one of them is a foundation, nor point of certainty, no one of them is incorrigible.”

Donald T. Campbell

Campbell’s work reminds us that every approach to evaluation is founded in epistemological assumptions and that being explicit about those assumptions, and their implications, is part of our responsibility as evaluators. Campbell wanted science, and evaluation, to keep the goal of truth: testing and inferring what is real in the world. But he acknowledged this goal as unattainable, so “we accept a … surrogate goal of increasing coherence even if we regard this as merely our best available approximation of the truth.”

Campbell was an intellectual giant but disarmingly modest. He was gracious and helpful to students and colleagues, and equally gracious to his critics. His openness to criticism and self-criticism modeled his vision of a “mutually monitoring, disputatious community of scholars.” Those who knew Don Campbell know with all the certainty allowed to humans just how special he was.

Reference for quotations:

Mark, M. M. (1998). The philosophy of science (and of life) of Donald T. Campbell. American Journal of Evaluation, 19(3), 399–402.

Resources:

Bickman, L., Cook, T. D., Mark, M. M., Reichardt, C. S., Sechrest, L., Shadish, W. R., & Trochim, W. M. K. (1998). Tributes to Donald T. Campbell. American Journal of Evaluation, 19(3), 397–426.

Brewer, M. B., & Collins, B. E. (Eds.). (1981). Scientific inquiry and the social sciences: A volume in honor of Donald T. Campbell. Jossey-Bass.

Campbell, D. T. (1994). Retrospective and prospective on program impact assessment. American Journal of Evaluation, 15(3), 291–298.

Campbell, D. T., & Russo, J. (2001). Social measurement. Sage.

The American Evaluation Association is celebrating Memorial Week in Evaluation: Remembering and Honoring Evaluation’s Pioneers. The contributions this week are remembrances of evaluation pioneers who made enduring contributions to our field.


Hello from beautiful Kalamazoo, Michigan. My name is Tammi Phillippe. I am a student in my first official evaluation course. Western Michigan University in Kalamazoo is home to the Evaluation Café, where Dr. Rodney Hopson, AEA President-Elect, recently gave a talk titled “Evaluation and the Public Good: Toward Whose Good, Whose Benefit and to What End?”

As a student of evaluation, I find myself wondering how I can – in practice – leave behind personal preferences, values, education, and possible biases and open myself to experiencing and learning about the evaluand with fresh eyes. I had the opportunity to ask Dr. Hopson the following question: “How do you approach evaluation or even education with your personal lens fixed in front of you and still be able to provide bias-free service?”

Lessons Learned – Dr. Hopson recounted a story of taking freshman education students into an urban area in East Pittsburgh and allowing those students to confront their own prejudices as a way to make space for new ideas and cultural sensitivity. The students had a number of experiences outside of their comfort zones, which helped them find a wider zone from which they could develop into better, more culturally sensitive teachers. He asserted that we cannot step away from our own preferences, biases, backgrounds, and knowledge; rather, these are tools in our toolboxes that we bring to the work we do. Our responsibility is to “out” our position deliberately and to be honest about our values with our clients and with ourselves.

Hot Tip – Know yourself, know your values. Within our role as evaluators, the task is to be true to our special interests and look at our multiple missions. Our responsibility within the context of our work is to move between “value-interested” and “value-committed” positions with full disclosure. Dr. Hopson clearly sees evaluation as an opportunity to “challenge, disrupt, and make the social order more democratic.” He does not believe that evaluation can be value-neutral; however, we can strive for impartiality.

All this week, we’re highlighting posts from colleagues at Western Michigan University as they reflect on a recent visit from incoming AEA President Rodney Hopson.


Hello everyone! Our names are Eun Kyeng Baek and SeriaShia Chatters. We are an evaluation team and doctoral students from the University of South Florida. We have served as internal and external evaluators in program evaluations in university settings, and we have experience in the development and administration of evaluation tools. Serving as an internal evaluator carries both advantages and disadvantages, and it is important to weigh the risks and rewards equally; failing to adequately consider the risks can have serious consequences. The following outline is a guide to help internal evaluators identify possible risks throughout the course of an evaluation and manage each risk as it arises.

Lessons Learned

Before you decide to participate:

  1. Consider the possible risks to your occupation: If the evaluation results are not favorable, could you lose your job? If an unforeseen event occurs during the course of the evaluation, could it have adverse effects on your reputation? Carefully scrutinize all of the possible risks and plan for the worst-case scenario.
  2. Consider collaborating with an external evaluator: Can the risks you may encounter be transferred? Collaborating with an external evaluator may minimize your risk, maximize the depth of your evaluation, and ensure adherence to ethics.

During the evaluation:

  1. Carefully choose which evaluation tools you will use: What is the best way to reduce bias or contamination of the evaluation results? How may your presence impact the results of the evaluation? Consider using tools and techniques that may allow participants to respond anonymously. Be ethical and consider consulting an external evaluator if issues arise.
  2. Be aware of office politics: Are there hidden agendas? Is there an alternative purpose for your evaluation? Carefully choose your evaluation questions. Ensure documentation does not disclose personal information about employees or other individuals that could implicate you in the future.

After the evaluation:

  1. Track the evaluation report and monitor the impact: Is the evaluation report being used for purposes it was not originally intended for? How has the evaluation report impacted the work environment? Keep accurate records of your involvement in the evaluation. Keep all information collected during the course of the evaluation confidential, and do not discuss your involvement with coworkers.

Above all else, protect yourself. Consider the risks mentioned above; in some cases the overall risk may be low, but the personal risk may still be too much for you to handle. Use your best judgment and ensure you are comfortable with your final decision.


