
EERS Week: Planning Study Sample Sizes by Eric Hedberg

My name is Eric Hedberg and I am a Senior Research Scientist at NORC at the University of Chicago. My role at NORC is best described as a social and behavioral research methodologist.

“Power” in study planning refers to the probability of detecting an effect, should one exist, given a sample design and size. My friends in the music industry are always talking about the size of microphones: bigger microphones can detect really quiet sounds, while smaller microphones (like the one on your cellphone) can only pick up loud noises. Think of sample size and design as your microphone, and your program effect as the sound. If your program has a small effect, you need a big sample (or microphone). If your program has a large effect, a smaller sample will do. If you bring a small sample to a small effect, you will not hear anything. In other words, your study will be underpowered.
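To make the microphone analogy concrete, here is a minimal sketch in Python using the statsmodels package; the choice of tool and all of the numbers (effect sizes of 0.20 and 0.80, 50 people per group, alpha of .05) are purely illustrative:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# The same "microphone" (50 people per group, alpha = .05) listening for
# a quiet sound (d = 0.20) versus a loud one (d = 0.80)
print(analysis.power(effect_size=0.20, nobs1=50, alpha=0.05))  # ~0.17: likely to miss it
print(analysis.power(effect_size=0.80, nobs1=50, alpha=0.05))  # ~0.98: almost certain to hear it
```

With the same sample, the quiet sound is missed more than 80% of the time, while the loud one is nearly always heard.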

At stake in power analysis is the chance to learn something from your study. When an underpowered study does not find an effect, the reason is difficult to pin down: maybe an effect exists but you could not detect it, or maybe there is no effect at all. Because you cannot tell which, time and money are wasted. Unrealistic assumptions about the effect you expect typically spell disaster for studies.

Hot Tips:

Making assumptions is hard. How can you form a reasonable expectation of your effect? I suggest searching for two types of studies: those that seek to change your outcome (e.g., math achievement), and those that use an intervention similar to yours on other outcomes (e.g., coaching reading programs). Between the two, you will learn how much the needle can move on your outcome, and how much your kind of intervention moves needles.

Cool Tricks:

Meta-analyses provide the best source of effect estimates, and finding a meta-analysis on a topic similar to your intervention is probably your best bet. The meta-analysis literature also offers formulas for converting between different types of effects; for example, odds ratios and correlations can be converted into Cohen’s d statistics.
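As a rough illustration, two commonly cited conversion formulas from the meta-analysis literature look like this in Python (the input values here are made up for the example):

```python
import math

def odds_ratio_to_d(odds_ratio):
    # Logit method: d = ln(OR) * sqrt(3) / pi
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

def r_to_d(r):
    # Correlation to Cohen's d: d = 2r / sqrt(1 - r^2)
    return 2 * r / math.sqrt(1 - r ** 2)

print(round(odds_ratio_to_d(1.5), 2))  # 0.22 -- a small effect
print(round(r_to_d(0.30), 2))          # 0.63 -- a medium-to-large effect
```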

Rad Resources:

  • Northwestern University’s Intraclass Correlation Database provides estimates of design parameters for educational outcomes (a quick sketch of why these parameters matter follows this list)
  • The William T. Grant Foundation’s Optimal Design Software for Multi-level and Longitudinal Research estimates the sample size needed to reach a target power given your design parameters
  • Learn more during my workshop at the EERS 2019 conference
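Here is the quick sketch promised above of why intraclass correlations (ICCs) matter when participants are nested in groups such as schools; the study dimensions and the ICC of .20 are hypothetical:

```python
def design_effect(cluster_size, icc):
    # Clustering inflates variance: DEFF = 1 + (m - 1) * ICC
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(total_n, cluster_size, icc):
    # How many independent observations the clustered sample is "worth"
    return total_n / design_effect(cluster_size, icc)

# Hypothetical study: 40 schools x 25 students = 1,000 students, ICC = 0.20
print(design_effect(25, 0.20))                       # 5.8
print(round(effective_sample_size(1000, 25, 0.20)))  # ~172 effectively independent students
```

Because students within the same school resemble one another, those 1,000 students carry roughly the information of 172 independent ones, which is why multilevel tools like Optimal Design ask for an ICC.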

Lessons Learned:

Remember that power analysis is about probabilities. If you power your study at .80, you have an 80% chance of detecting a true effect. Are you comfortable with a 20% chance of missing it? If at all possible, get the sample that maximizes power.
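As a final sketch of what that trade-off looks like in practice (again using statsmodels; the assumed effect of d = 0.30 and alpha of .05 are hypothetical, not recommendations):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group sample size needed for 80% power in a two-sided, two-group comparison
n_per_group = analysis.solve_power(effect_size=0.30, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 175 per group

# If recruitment falls short at 100 per group, power drops sharply
print(round(analysis.power(effect_size=0.30, nobs1=100, alpha=0.05), 2))  # ~0.56
```

Falling from roughly 175 to 100 per group turns an 80% chance of detecting the effect into little better than a coin flip.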

 

The American Evaluation Association is celebrating Eastern Evaluation Research Society (EERS) Affiliate Week. The contributions all this week to aea365 come from EERS members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
