Hello! I’m William Faulkner, Director of Flux, an M&E consultancy based in New Orleans. I want to pull back the curtain on perhaps the most famous experiment in international development history: the evaluation conducted by the Washington, DC-based International Food Policy Research Institute (IFPRI) of Mexico’s largest anti-poverty program, PROGRESA (now Prospera).
Basically, the mainstream narrative of this evaluation ignores three things:
- Randomization: This evaluation did not randomly assign households to treatment and control status; it only leveraged randomization partway. Under the “clustered matched-pairs design,” it was participating communities, not households, that were first assigned to treatment and control status, and that community-level assignment was non-random.
- Attrition: Selective sample attrition was substantial and went unaccounted for in the analyses.
- Contamination: Treatment communities were almost certainly ‘contaminated’ by migrants from control communities. The evaluators even ended the project early because of pressure from local authorities in control communities.
The project that “proved” experiments could be rigorously applied in developing-country contexts was, in fact, neither experimental nor rigorous. In other words, a blind eye was turned to the project’s severe internal weaknesses. The stakes help explain why: here was an enormous and delicate opportunity to de-politicize the image of social programming at the national level, put wind in the sails of conditional cash transfers, bolster the credibility of evidence-based policy worldwide, and sustain the direct flow of cash to poor Mexicans.
So What? (Lessons Learned):
What does this case illuminate about experiments?
Let’s leave the shouting behind and get down to brass tacks. The “experiment-as-gold-standard” agenda still commands a significant swath of the networks that commission and undertake evaluations. Calls for methodological pluralism, however, are neither new nor in need of another defense here. Rather than getting bogged down in theoretical debates about what experiments can and cannot do, M&E professionals should systematically target and correct overzealous representations of experiments.
Yet here in 2016, we still see breathless, infomercial-like articles on experiments in the New York Times. This has to stop. At the same time, we absolutely must respect the admirable achievement of the randomistas: carving out a space for this fascinatingly influential methodology in M&E where previously none existed.
The individuality of each evaluation project makes talking about ‘experiments’ as a whole difficult: these things are neither packaged nor predictable. As with IFPRI-PROGRESA, micro-decisions matter. Context matters. History matters. This case is an ideal centerpiece for a grounded, fruitful discussion of the rewards and risks of experimental evaluation.
Rad Resources:
- A solid overview of where experiments sit in the overall landscape of impact evaluation methods: Broadening the Range of Designs and Methods for Impact Evaluations.
- An excellent lecture by Harvard professor Lant Pritchett on the powers & pitfalls of experiments.
The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.