I’m Paul Bakker, the founder and lead consultant of Social Impact Squared. I help agencies with a social purpose understand, measure, communicate, and improve their outcomes. One of the services I provide is data analysis, so I deal with statistical significance quite a lot.
Hot Tip: Abandon the 95% rule. In statistics classes, they teach you to reject the null hypothesis (of no difference) if the p-value is below 5%, that is, if results at least as extreme as yours would occur less than 5% of the time when the null hypothesis is true. It makes sense to give students working on textbook examples a rule of thumb, but people don’t use such a rule when making real-life decisions. Before you analyze your data, discuss with your clients and the relevant decision makers the level of confidence they need to make a decision. Maybe they want to be 95% confident, or maybe being 80% confident is good enough for them to act.
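As a minimal sketch of what that conversation changes in practice (the p-value and thresholds below are hypothetical, chosen only for illustration):

```python
# Minimal sketch: the decision threshold is a choice to make with your client.
# The p-value and confidence levels here are hypothetical, for illustration only.
p_value = 0.08  # hypothetical result from a single test

for confidence in (0.95, 0.90, 0.80):
    alpha = 1 - confidence
    decision = "act on the difference" if p_value < alpha else "do not act"
    print(f"At {confidence:.0%} confidence (alpha = {alpha:.2f}): {decision}")
```

The same result leads to different decisions at different confidence levels, which is exactly why the threshold should be set with the decision makers rather than defaulted to 95%.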
Hot Tip: Consider the need to adjust for increases in error due to multiple tests. Often, you need to run multiple tests to analyze your data. For independent tests, the chance that at least one test will conclude that there is a difference when there isn’t is: 1 − (1 − α)^n, where α is the significance level of each test (e.g., 0.05) and n is the number of tests.
For instance, if you ran 20 tests at the 95% confidence level (a 5% significance level for each test), then the chance that at least one of those tests provides you with a wrong answer is: 1 − (1 − 0.05)^20 = 1 − 0.95^20 ≈ 0.64, or about 64%.
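A quick way to check this family-wise error rate for any number of tests (a minimal sketch using the formula above, assuming the tests are independent):

```python
# Minimal sketch: family-wise error rate for n independent tests,
# using the formula 1 - (1 - alpha)^n from the tip above.
def familywise_error_rate(alpha: float, n_tests: int) -> float:
    """Chance that at least one of n independent tests is a false positive."""
    return 1 - (1 - alpha) ** n_tests

print(familywise_error_rate(alpha=0.05, n_tests=1))   # 0.05
print(familywise_error_rate(alpha=0.05, n_tests=20))  # ~0.64
```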
The typical advice is to apply a stricter (lower) significance level to each individual test, for example with a Bonferroni correction. However, consider the following possible scenario. Out of those 20 tests, 6 are significant at the 95% confidence level, but only one is significant at the 99% level. With 20 tests at 95% confidence you would expect roughly one false positive, so about five of those six significant results are probably real. What is more important to your client? Acting on one incorrect difference, or not acting on four real differences? (See the sketch below.)
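Here is a sketch of that trade-off with hypothetical p-values invented to match the scenario above (6 results below 0.05, only 1 below 0.01); the thresholds show how a stricter rule or a Bonferroni correction shrinks the number of results you would act on:

```python
# Minimal sketch with hypothetical p-values for 20 tests: how many findings
# would you act on at different thresholds? Values are invented for illustration.
p_values = [0.001, 0.012, 0.020, 0.031, 0.038, 0.047] + [0.20] * 14  # hypothetical

thresholds = {
    "95% confidence (alpha = 0.05)": 0.05,
    "99% confidence (alpha = 0.01)": 0.01,
    "Bonferroni-adjusted (0.05 / 20)": 0.05 / 20,
}

for label, alpha in thresholds.items():
    significant = sum(p < alpha for p in p_values)
    print(f"{label}: act on {significant} of {len(p_values)} results")
```

Stricter thresholds protect you from acting on a chance result, but in this scenario they would also leave most of the likely real differences on the table.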
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.