
RTD TIG Week: Bias – it’s my favorite four-letter word by Di Cross

My name is Di Cross, and I work at Clarivate Analytics, where we conduct evaluations of scientific research funded by government agencies, non-profits, academic institutions, and industry.

I cringe when I hear mention of ‘unbiased analysis’. What an oversimplification to state that an analysis (or evaluation) is unbiased! Everyone carries their own biases. Some exist as part of our brain’s internal wiring to enable us to go about our day without being paralyzed by the tremendous amount of information that our sensory systems constantly receive.

But what specifically do I mean by bias?

In statistics, the bias of an estimator is the difference between the estimator’s expected value and the population parameter it is intended to measure. For example, the arithmetic average of random samples drawn from a normal distribution is an unbiased estimator of the population average. As even Wikipedia points out, ‘bias’ in statistics does not carry the same negative connotation it has in common English. However, this holds only in the absence of systematic errors.
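To make that statistical definition concrete, here is a minimal simulation sketch in Python (the population mean, standard deviation, and sample sizes are illustrative assumptions, not figures from this post). It approximates the bias of the sample mean by repeated sampling, first from a clean normal population and then with a systematic measurement error added.

```python
import numpy as np

# Illustrative numbers only: estimate the mean of a normal population
# whose true mean is 10 and standard deviation is 2.
rng = np.random.default_rng(42)
true_mean = 10.0
n_repeats, sample_size = 100_000, 25

# Bias of an estimator = (expected value of the estimator) - (true parameter).
# Approximate the expectation by averaging the estimator over many repeated samples.
sample_means = np.array(
    [rng.normal(true_mean, 2.0, sample_size).mean() for _ in range(n_repeats)]
)
print("Bias of the sample mean:", sample_means.mean() - true_mean)  # close to 0

# Now introduce a systematic error: every measurement reads 0.5 units too high.
print("Bias with a systematic error:", (sample_means + 0.5).mean() - true_mean)  # about 0.5
```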

Systematic errors are more akin to the common English definition of bias: ‘a bent or tendency’, ‘an inclination of temperament or outlook; especially…a personal and sometimes unreasoned judgment, prejudice; an instance of such prejudice.’

So what do we do?

Hot Tip #1: Don’t panic!

Do not fool yourself into thinking that you can design and conduct evaluations which are 100% free of bias. Accept that there will be bias in some element of your evaluation. But of course, do your best to minimize bias where you can.

Hot Tip #2: Develop a vocabulary about bias

There are many sources of bias. Students of epidemiology, the discipline from which I approach evaluation, study selection bias; measurement error, including differential and non-differential misclassification; confounding; and generalizability. There are also discussions of bias specific to evaluation.

Hot Tip #3: Adjust your design where possible

After identifying potential sources of bias in your study design, address them as early in your evaluation as possible – preferably during the design phase. Alternatively, addressing bias might mean performing the analysis differently, or skipping ahead to Hot Tip #4.

(Note: There is something to be said for accepting a biased estimator – or, dare I say, a biased study design – over one that is unbiased. This might be because the unbiased estimator is vastly more expensive than a biased estimator that isn’t too far off the mark. Or it might be for reasons of risk: wouldn’t you rather consistently underestimate the time it takes to bake a batch of cookies than be right on average, but risk having to throw away a charred batch half of the time?)
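If it helps to see that risk argument in numbers, here is a rough simulation sketch (Python again, with made-up values rather than anything from this post): an unbiased guess of the baking time overshoots, and chars the batch, about half the time, while a deliberately low, biased guess overshoots far less often.

```python
import numpy as np

# Made-up numbers: the cookies truly need 12 minutes.
# One timer guess is unbiased but noisy; the other deliberately guesses low.
rng = np.random.default_rng(7)
true_time = 12.0
n_batches = 100_000

unbiased_guess = rng.normal(true_time, 2.0, n_batches)             # bias ~ 0
conservative_guess = rng.normal(true_time - 2.0, 2.0, n_batches)   # bias ~ -2

# A batch chars whenever the guessed time exceeds the true baking time.
for label, guess in [("Unbiased", unbiased_guess), ("Conservative", conservative_guess)]:
    print(f"{label:12s} bias = {guess.mean() - true_time:+.2f}, "
          f"charred {100 * (guess > true_time).mean():.0f}% of batches")
```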

Hot Tip #4: Be transparent

Where it is not possible to address bias, describe it and acknowledge that it exists. Take it into consideration in your interpretation. As a prior AEA blog writer put it, ‘out’ yourself. Be forthcoming about sources of bias and communicate their effect on your evaluation to your audience.

The American Evaluation Association is celebrating Research, Technology and Development (RTD) TIG Week with our colleagues in the Research, Technology and Development Topical Interest Group. The contributions all this week to aea365 come from our RTD TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
