Experiments TIG Week: More Things You Thought You Couldn’t Learn from a Randomized Experiment… But You Can… by Steve Bell

Hello, again!  It’s Steve Bell here, that evaluator with Abt Associates who is eager to share some insights regarding the learning potential of social experiments. In a week-long blog series, we are examining concerns about social experiments to offer tips for how to avoid common pitfalls and to support the extension of this powerful research method to wider applications.

Today we turn to three apparent limits on what experiments can teach us.  Perhaps you’ve heard these concerns:

  • “You can’t randomize an intervention that seeks to change a whole community and its social systems.”
  • “If you put some people into an experiment it will affect other people you’ve left out of the study.”
  • “The impacts of individual program components are lost in the overall ‘with/without’ comparison provided by a social experiment.”

A closer look at these three concerns shows that none of them should deter the use of randomized experiments.

First, evaluations of community-wide interventions are prime candidates for application of the experimental method if the policy questions to be addressed are sufficiently important to justify the resources required.  The U.S. is a very large nation, with tens of thousands of local communities or neighborhoods that could be randomly assigned into or out of a particular community-level policy or intervention.  There is no feasibility constraint on randomizing many places, only a willingness constraint.  And sure, community saturation interventions make data collection more difficult and expensive, and any impacts that do occur are harder to detect because they tend to be diffused across many people in the community.  However, these drawbacks afflict any impact evaluation of a saturation intervention, not just randomized experiments.
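To make the point concrete, here is a minimal sketch of what community-level (cluster) random assignment can look like in practice. The community names, the number of sites, and the 50/50 split are all hypothetical, chosen for illustration only; a real study would draw its site list and allocation ratio from its own design.

```python
# Minimal sketch of community-level random assignment.
# All site names and counts below are hypothetical.
import random

random.seed(20240101)  # fixed seed so the assignment is reproducible and auditable

communities = [f"community_{i:03d}" for i in range(1, 41)]  # 40 illustrative sites

shuffled = random.sample(communities, k=len(communities))   # random ordering of sites
treatment = set(shuffled[: len(shuffled) // 2])             # first half receives the intervention

assignment = {
    site: ("intervention" if site in treatment else "control")
    for site in communities
}

for site, arm in sorted(assignment.items()):
    print(site, arm)
```

The point of the sketch is simply that nothing about the mechanics of randomization breaks down when the unit is a community rather than a person; the binding constraints are resources and willingness, not feasibility.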

Second, in an interconnected world, some consequences of social policies inevitably spill over to individuals not directly engaged in the program or services offered.  This is a measurement challenge, not a flaw of randomization.  Any research study, experimental or otherwise, that is based exclusively on data for individuals participating in an intervention and a sample of unaffected non-participants will miss some of the intervention’s effects, namely those that spill over to people outside the study sample.  Randomization does not make spillover effects any more difficult to measure.

Third, the up/down nature of experimental findings is thought to limit the usefulness of social experiments as a way to discover how a program can be made more effective or less costly through changes in its intervention components.  One response is obvious: randomize more things, including the components themselves.  Multi-stage random assignment can also be used to answer questions about the effects of different treatment components when program activities naturally occur in sequence.
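As a rough illustration of both ideas, the sketch below crosses two hypothetical program components in a simple factorial design and then adds a second-stage lottery among first-stage participants. The component names ("coaching," "subsidy," a follow-up "booster"), the sample size, and the assignment probabilities are all invented for the example, not drawn from any actual program.

```python
# Minimal sketch: randomizing program components (2x2 factorial) plus a
# second-stage random assignment among first-stage participants.
# All component names and sample sizes are hypothetical.
import itertools
import random

random.seed(7)

participants = [f"person_{i:04d}" for i in range(1, 201)]  # 200 illustrative enrollees

# Stage 1: cross two components so each combination is its own arm.
arms = list(itertools.product(["coaching", "no_coaching"], ["subsidy", "no_subsidy"]))
stage1 = {p: random.choice(arms) for p in participants}

# Stage 2: among those assigned to coaching, randomize a later "booster" session,
# answering a question about a component that occurs later in the sequence.
coached = [p for p, arm in stage1.items() if arm[0] == "coaching"]
boosted = set(random.sample(coached, k=len(coached) // 2))

for p in participants[:5]:  # preview a few assignments
    print(p, stage1[p], "booster" if p in boosted else "no_booster")
```

The design choice to embed the second lottery inside the first-stage coaching group is what lets the study estimate the incremental effect of the later component, rather than only the overall with/without contrast.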

The bottom line:  Don’t let naysayers turn society away from experimental designs without first thinking through what is achievable.

Up for our final discussion tomorrow: The “biggest complaints” about experiments debunked.

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
