I am Melinda Davis, a Research Assistant Professor of Psychology at the University of Arizona. I coordinate the Program Evaluation and Research Methods minor and serve as Editor-in-Chief of the Journal of Methods and Measurement in the Social Sciences. In an ideal world, evaluation studies compare two groups that differ only in treatment assignment. Unfortunately, there are many ways a comparison group can differ from the intervention group.
Lesson Learned: As evaluators, we conduct experiments to examine the effects of potentially beneficial treatments, and we need control groups to evaluate those effects. Participants assigned to a control group usually receive either a placebo intervention or the status quo intervention (business-as-usual). Individuals assigned to a treatment-as-usual control group may refuse randomization, drop out during the course of the study, or obtain the treatment on their own. It can be quite challenging to create a plausible placebo condition, or what evaluators call the “counterfactual” condition, particularly for a social services intervention. Participants in a placebo condition may receive a “mock” intervention that differs in the amount of time, attention, or desirability it offers, any of which can produce differential attrition or shape attitudes about the effectiveness of the treatment. At the end of a study, evaluators may not know whether an observed effect is due to time spent, attention received, participant satisfaction, group differences resulting from differential dropout rates, or the active component of the treatment. Many threats to validity can appear as problems with the control group, such as maturation, selection, differential loss of respondents across groups, and selection-maturation interactions (see Shadish, Cook, and Campbell, 2002).
Cool Trick: Shadish, Clark, and Steiner demonstrate an elegant approach to the control group problem. While the focus of their study was not control group issues, their doubly randomized preference trial (DRPT) included a well-designed control group. Participants were first randomized into two arms: one arm was then randomly assigned to math or vocabulary instruction, while the other arm received whichever instruction the participants themselves chose.
The evaluators collected math and vocabulary outcomes for all participants throughout the study. The effects of the vocabulary intervention on the vocabulary outcome, the effects of the mathematics intervention on the mathematics outcome, and changes across the treated versus untreated conditions could then be compared, taking covariates into account. This design allowed the evaluators to separate the effects of participant self-selection from the effects of the treatments themselves.
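For readers who think in code, the two-stage assignment logic of a DRPT can be sketched as a small simulation. This is a minimal illustration of the general design, not the authors' actual procedure; the participant IDs, preference data, and function name are hypothetical.

```python
import random

def drpt_assign(participant_ids, preferences, seed=0):
    """Sketch of doubly randomized preference trial (DRPT) assignment.

    Stage 1: each participant is randomized to the 'randomized' arm
    or the 'choice' arm. Stage 2: the randomized arm is assigned math
    or vocabulary instruction at random, while the choice arm receives
    whichever instruction the participant prefers.
    """
    rng = random.Random(seed)
    assignments = {}
    for pid in participant_ids:
        arm = rng.choice(["randomized", "choice"])
        if arm == "randomized":
            treatment = rng.choice(["math", "vocabulary"])
        else:
            treatment = preferences[pid]  # participant's own stated choice
        assignments[pid] = (arm, treatment)
    return assignments

# Hypothetical participants with stated instruction preferences
prefs = {"p1": "math", "p2": "vocabulary", "p3": "math", "p4": "vocabulary"}
result = drpt_assign(list(prefs), prefs, seed=42)
```

Comparing outcomes across the two arms is what lets evaluators estimate how much self-selection (preference) contributes to observed effects, over and above the treatment itself.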
As evaluators, we benefit from staying aware of potential threats to validity and of novel study designs that can reduce those threats.
The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.