AEA365 | A Tip-a-Day by and for Evaluators

December 14, 2016

Experiments TIG Week: William Faulkner on A Righteous Cover-up: Behind the Scenes of the Most Famous “Experiment” in International Development History

Hello! I’m William Faulkner, Director of Flux, an M&E consultancy based in New Orleans. I want to pull back the curtain on perhaps the most famous experiment in international development history – the one conducted by Washington DC’s IFPRI on Mexico’s largest anti-poverty program, PROGRESA (now Prospera).

The Down-Low:

Basically, the mainstream narrative of this evaluation ignores three things:

  • Randomization: The evaluation never randomly assigned households to treatment and control status; it only leveraged randomization. Under the “clustered matched-pairs design,” participating communities were first assigned to treatment and control status non-randomly.
  • Attrition: Selective sample attrition was substantial and went unaccounted for in the analyses.
  • Contamination: Treatment communities were doubtless ‘contaminated’ by migrants from control communities. The project was even ended early because of pressure from local authorities in control communities.
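The attrition point in particular lends itself to a quick illustration. The simulation below is a hedged sketch with invented numbers, not a reanalysis of any PROGRESA data: it shows how selective attrition (here, the worst-off control households dropping out of the sample) biases a simple difference-in-means estimate even when the underlying assignment is clean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented setup: 5,000 control and 5,000 treatment households,
# with a true treatment effect of 2.0 on some outcome.
n = 5000
true_effect = 2.0
y_control = 10 + rng.normal(0, 3, n)
y_treat = 10 + true_effect + rng.normal(0, 3, n)

# With the full sample, a simple difference in means
# recovers the true effect.
est_full = y_treat.mean() - y_control.mean()

# Selective attrition: the worst-off 30% of control households
# leave the sample (e.g., migrating toward treatment communities).
cutoff = np.quantile(y_control, 0.3)
y_control_observed = y_control[y_control > cutoff]

# The surviving control group now looks better off than it was,
# so the estimated effect shrinks.
est_attrited = y_treat.mean() - y_control_observed.mean()

print(f"full sample:    {est_full:.2f}")
print(f"with attrition: {est_attrited:.2f}")
```

Because the dropouts are the poorest control households, the observed control mean is inflated and the estimated effect is biased downward; attrition correlated with the outcome in the opposite direction would bias it upward instead.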

The project that “proved” that experiments could be rigorously applied in developing-country contexts was neither experimental nor rigorous. In other words, a blind eye was turned to the project’s fairly severe internal weaknesses. Why? There was an enormous and delicate opportunity at stake: to de-politicize the image of social programming at the national level, put wind in the sails of conditional cash transfers, bolster the credibility of evidence-based policy worldwide, and sustain the direct flow of cash to poor Mexicans.

So What? (Lessons Learned):

What does this case illuminate about experiments?

Let’s leave the shouting behind and get down to brass tacks. The “experiment-as-gold-standard” agenda still commands a significant swath of the networks that commission and undertake evaluations. Claims for methodological pluralism, however, are neither new nor in need of immediate defense. Instead of getting bogged down in theoretical discussions about what experiments can and cannot do, M&E professionals should systematically target and correct overzealous representations of experiments.

Still, in 2016, breathless, infomercial-like articles on experiments are coming out in the New York Times. This has to stop. At the same time, we absolutely must respect the admirable achievement of the randomistas: the fact that there is now a space for this fascinatingly influential methodology in M&E where previously none existed.

The individuality of each evaluation project makes talking about ‘experiments’ as a whole difficult. These things are neither packaged nor predictable. As with IFPRI-Progresa, micro-decisions matter. Context matters. History matters. This case is an ideal centerpiece with which to induce a grounded, fruitful discussion of the rewards and risks of experimental evaluation.

Rad Resources:

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


2 comments

  • Tom Archibald · December 15, 2016 at 7:29 am

    One point of disagreement, though, regarding this: “the fact that there’s a space for this fascinatingly influential methodology in M&E where previously none existed.” In my understanding of evaluation history, experiments were there from the beginning, perhaps as the most common and dominant approach. It took decades for qualitative, participatory etc. approaches to enter the picture and be accepted. Then, around 2000, the randomistas struck back through US gov’t re-positioning the RCT as gold standard. Since then, the pendulum has simply been swinging back to a logical and reasoned position of both/and.


  • Tom Archibald · December 15, 2016 at 6:16 am

    This wins the aea365 of the year award for me. Will makes some very important points about the fallibility of the “experiment-as-gold-standard” agenda, and shares some great resources. Thanks, Will!

