
Experiments TIG Week: The Fidelity of Policy Comparisons: Do Social Experiments Inevitably Distort the Programs They Set Out to Study? by Steve Bell

Hello. I am Steve Bell, Research Fellow at Abt Associates specializing in rigorous impact evaluations, here to share some thoughts about experimental evaluations in practice. In this week-long blog series, we are examining concerns about social experiments, offering tips for avoiding common pitfalls and supporting the extension of this powerful research method to wider applications.

Today, we ask whether randomization necessarily distorts the intervention that an experiment sets out to evaluate. A potential treatment group distortion occurs when the experiment excludes a portion of a program’s normally served population to form a research “control” group. As a result, either (1) the program serves fewer people than usual, operating below normal capacity, or (2) it serves people who ordinarily would not be served. The first scenario can be problematic if the slack capacity allows programs to offer participants more services than usual, artificially enhancing the intervention relative to its normal state. The second scenario can be problematic if the people now being served differ from those ordinarily served. For example, if a program relaxes its eligibility criteria, say by lowering educational requirements, it serves a different group of people, and impacts may be larger or smaller than they would be for the program’s standard target population. Fortunately, Olsen, Bell and Nichols (2016) have proposed a way to identify which individuals would ordinarily have been served so that impact results can be produced for just that subset.
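
To make the idea concrete, here is a minimal illustrative sketch, not the Olsen, Bell and Nichols (2016) procedure itself, of how an analyst might compute an impact estimate restricted to the subset flagged as ordinarily served. The data structure and column names are assumptions for illustration only.

```python
# Illustrative sketch only -- not the Olsen, Bell & Nichols (2016) method.
# Assumes a pandas DataFrame with hypothetical columns:
#   treated            1 if randomized to the program, 0 if assigned to control
#   ordinarily_served  1 if flagged as someone the program would normally serve
#   outcome            the outcome of interest (e.g., earnings)
import pandas as pd

def subset_impact(df: pd.DataFrame) -> float:
    """Difference-in-means impact estimate, restricted to the subset
    of sample members who would ordinarily have been served."""
    usual = df[df["ordinarily_served"] == 1]
    return (usual.loc[usual["treated"] == 1, "outcome"].mean()
            - usual.loc[usual["treated"] == 0, "outcome"].mean())
```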

The problem of a different-than-usual participant population diminishes as the control group shrinks relative to the studied program’s capacity. With only a few control group members at any site, the pool of people served by the program broadens much less. This suggests another solution: where feasible, an evaluation should spread a fixed number of control group members across many local programs, creating only a few control group cases in any one community. This option also appeals to program staff, who are often hesitant to turn away many applicants to form a control group.
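
As a rough, back-of-the-envelope illustration of why spreading controls thinly helps, the short calculation below uses entirely made-up numbers (sample sizes and site capacity are assumptions) to show how small the displacement at any one site becomes.

```python
# Back-of-the-envelope check, with made-up numbers, of how thinly a fixed
# control group is spread when allocated across many local programs.
total_controls = 300   # control cases the evaluation needs overall (assumed)
n_sites = 100          # participating local programs (assumed)
site_capacity = 60     # typical slots per program per intake cycle (assumed)

controls_per_site = total_controls / n_sites
share_displaced = controls_per_site / site_capacity
print(f"{controls_per_site:.0f} control cases per site, "
      f"displacing {share_displaced:.0%} of a typical site's intake")
# -> 3 control cases per site, displacing 5% of a typical site's intake
```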

In sum, social experiments need not distort the programs they set out to study.

Up for discussion tomorrow: Practitioner insights on how to overcome some common administrative challenges to running an experiment.

Rad Resource:

For additional detail on this issue of the fidelity of policy comparisons, as well as other issues that this week-long blog considers, please read On the Feasibility of Extending Social Experiments to Wider Applications.

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
