Design and Analysis of Experiments

DUP Week: Accessible Evaluation Techniques with June Gothberg

Greetings, I am June Gothberg, Ph.D. from Western Michigan University, Chair of the Disabilities and Underrepresented Populations TIG and co-author of the Universal Design for Evaluation Checklist (4th ed.).   Historically, our TIG has been a ‘working’ TIG, working collaboratively with AEA and the field to build capacity for accessible and inclusive evaluation.  Several terms tend to …

DUP Week: Using Single Subjects Methodology in Program Evaluation by Brian Molina

Greetings, I am Brian Molina, a graduate student in Western Michigan University’s Industrial/Organizational Behavior Management doctoral program. I have conducted single subject research and implemented performance improvement projects across many different settings and organizations. Lessons Learned: Single subject research can be used to evaluate program effectiveness across large groups. The belief that single subject research …

Experiments TIG Week: Biggest Complaints: Experiments have limited external validity, take too long, and cost too much by Laura Peck

Welcome to the final installment of the Design & Analysis of Experiments TIG-sponsored week of AEA365.  It’s Laura Peck of Abt Associates, here again to address some complaints about experiments. Experiments have limited external validity Experimental evaluation designs are often thought to trade internal validity (ability to claim cause-and-effect between program and impact) with external …

Experiments TIG Week: More Things You Thought You Couldn’t Learn from a Randomized Experiment… But You Can… by Steve Bell

Hello, again!  It’s Steve Bell here, that evaluator with Abt Associates who is eager to share some insights regarding the learning potential of social experiments. In a week-long blog series, we are examining concerns about social experiments to offer tips for how to avoid common pitfalls and to support the extension of this powerful research …

Experiments TIG Week: Administrative Challenges to Running an Experiment in the Field and How I Overcame Them by Lisette Nieves

Hello!  I am Lisette Nieves, founder of Year Up NY, a service organization that has happily and successfully used an experimental evaluation to assess program effectiveness.  Today’s blogpost reflects on administrative challenges that need not get in the way of using experiments in practice.  I strongly believe in the nonprofit sector and what it does …

Experiments TIG Week: The Fidelity of Policy Comparisons: Do Social Experiments Inevitably Distort the Programs They Set Out to Study? by Steve Bell

Hello.  I am Steve Bell, Research Fellow at Abt Associates specializing in rigorous impact evaluations, here to share some thoughts about experimental evaluations in practice.  In this week-long blog series, we are examining concerns about social experiments to offer tips for how to avoid common pitfalls and to support the extension of this powerful research …

Experiments TIG Week: What Can Experimental Evaluations Tell Us? And Why We Should Not Be So Doubtful About What They Won’t Tell Us by Laura Peck

Hello AEA365 readers!  I am Laura Peck, founder and co-chair of the AEA’s recently-established (and growing) Design & Analysis of Experiments TIG.  I work at Abt Associates as an evaluator in the Social & Economic Policy Division and director of Abt’s Research & Evaluation Expertise Center.  Today’s AEA365 blogpost recaps what experimental evaluations typically tell …

Experiments TIG Week: The Ethics of Using Experimental Evaluations in the Field by Laura Peck and Steve Bell

Greetings, and welcome to a week’s worth of insights sponsored by the Design and Analysis of Experiments TIG!  We are Laura Peck and Steve Bell, program evaluators with Abt Associates. When deciding how to invest in social programs, policymakers and program managers increasingly ask for evidence of effectiveness.  A strong method for measuring a program’s …

Experiments TIG Week: Laura Peck on The Origins and Meaning of the “Black Box” Label and How Innovative Experimental Research is Working to Shake It

Hi again, it’s Laura Peck here, that evaluator from Abt Associates.  To close out the Design & Analysis of Experiments TIG’s first week of contributions to the AEA365 blog, I focus on one of the main critiques of experimental evaluations, known as the “black box” criticism. Experimentally-designed evaluations can isolate the impact of an …

Experiments TIG Week: Keith Zvoch on Strong Program Evaluation Design Alternatives

Keith Zvoch here. I am an Associate Professor at the University of Oregon. In this post, I would like to discuss regression discontinuity (RD) and interrupted time series (ITS) designs, two strong and practical alternatives to the randomized control trial (RCT). Cool Trick: Take Advantage of Naturally Occurring Design Contexts Evaluators are often charged with …
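The regression discontinuity logic Keith mentions can be sketched in a few lines: units at or above a cutoff on a running variable receive the program, and the treatment effect is estimated as the jump in the outcome right at the cutoff. This is a minimal illustrative sketch with simulated data, not code from the post; the function name, cutoff, bandwidth, and data-generating values are all assumptions chosen for demonstration.

```python
# Minimal sketch of a sharp regression discontinuity (RD) estimate.
# Units with running variable >= cutoff are treated; the estimated
# effect is the jump in the outcome at the cutoff.
import numpy as np

def rd_estimate(running, outcome, cutoff=0.0, bandwidth=1.0):
    """Local linear RD: fit a separate line on each side of the cutoff
    within the bandwidth and return the difference in intercepts."""
    x = running - cutoff
    keep = np.abs(x) <= bandwidth
    x, y = x[keep], outcome[keep]
    left = x < 0
    # np.polyfit(deg=1) returns [slope, intercept]; the intercept is the
    # fitted outcome at the cutoff on that side.
    slope_l, int_l = np.polyfit(x[left], y[left], 1)
    slope_r, int_r = np.polyfit(x[~left], y[~left], 1)
    return int_r - int_l  # jump at the cutoff = estimated effect

# Simulated example with a true treatment effect of 2.0 at cutoff 0.
rng = np.random.default_rng(0)
running = rng.uniform(-1, 1, 2000)
outcome = 1.0 + 0.5 * running + 2.0 * (running >= 0) + rng.normal(0, 0.1, 2000)
effect = rd_estimate(running, outcome)
```

In practice, bandwidth choice and the functional form on each side of the cutoff matter a great deal; this sketch uses a simple local linear fit only to show where the "naturally occurring" cutoff does the identifying work that randomization does in an RCT.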
