AEA365 | A Tip-a-Day by and for Evaluators


Hello! I’m William Faulkner, Director of Flux, an M&E consultancy based in New Orleans. I want to pull back the curtain on perhaps the most famous experiment in international development history – the one conducted by the Washington, DC-based International Food Policy Research Institute (IFPRI) on Mexico’s largest anti-poverty program, PROGRESA (now Prospera).

The Down-Low:

Basically, the mainstream narrative of this evaluation ignores three things:

  • Randomization: The evaluation did not randomly assign households to treatment and control status; it only leveraged a randomization carried out at another level. Under the “clustered matched-pairs design,” participating communities, not households, were first assigned to treatment and control status, and that assignment was not random.
  • Attrition: Selective sample attrition was substantial and went unaccounted for in the analyses (see the sketch after this list).
  • Contamination: Treatment communities were almost certainly ‘contaminated’ by migrants from control communities, and the experiment was ended early because of pressure from local authorities in control communities.
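
To make the attrition point concrete (this is a hedged illustration, not a re-analysis of the actual PROGRESA data), here is a minimal sketch of how an evaluator might check for selective attrition in a two-arm design. The file name and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical baseline roster: one row per household, with an indicator for
# whether the household was re-interviewed at endline. Names are illustrative:
# columns assumed to be 'treated' (0/1), 'followed_up' (0/1), 'baseline_consumption'.
df = pd.read_csv("households.csv")

# 1. Do follow-up (retention) rates differ between treatment and control?
print(df.groupby("treated")["followed_up"].mean())
table = pd.crosstab(df["treated"], df["followed_up"])
chi2, p, _, _ = stats.chi2_contingency(table)
print(f"Differential attrition: chi2={chi2:.2f}, p={p:.3f}")

# 2. Are the households that dropped out systematically different at baseline?
stayers = df.loc[df["followed_up"] == 1, "baseline_consumption"]
leavers = df.loc[df["followed_up"] == 0, "baseline_consumption"]
t, p = stats.ttest_ind(stayers, leavers, equal_var=False)
print(f"Baseline consumption, stayers vs. leavers: t={t:.2f}, p={p:.3f}")

# In a clustered design like IFPRI-PROGRESA's, these checks would also need to
# account for assignment at the community level (e.g., cluster-robust errors).
```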

The project that “proved” experiments could be rigorously applied in developing country contexts was neither experimental nor rigorous. A blind eye was turned to the project’s pretty severe internal weaknesses, in part because the stakes were so high: the evaluation offered an enormous and delicate opportunity to de-politicize the image of social programming at the national level, put wind in the sails of conditional cash transfers, bolster the credibility of evidence-based policy worldwide, and sustain the direct flow of cash to poor Mexicans.

So What? (Lessons Learned):

What does this case illuminate about experiments?

Let’s leave the shouting behind and get down to brass tacks. The “experiment-as-gold-standard” agenda still commands a significant swath of the networks that commission and undertake evaluations. Claims for methodological pluralism, however, are neither new nor in need of immediate defense. Instead of getting bogged down in theoretical discussions about what experiments can and cannot do, M&E professionals should systematically target and correct overzealous representations of experiments.

Still, in 2016, we have breathless, infomercial-like articles on experiments coming out in the New York Times. This has to stop. At the same time, we absolutely must respect the admirable achievement of the randomistas: the fact that there is now a space for this fascinatingly influential methodology in M&E where previously none existed.

The individuality of each evaluation project makes talking about ‘experiments’ as a whole difficult. Experiments are neither pre-packaged nor predictable. As with IFPRI-PROGRESA, micro-decisions matter. Context matters. History matters. This case is an ideal centerpiece with which to prompt a grounded, fruitful discussion of the rewards and risks of experimental evaluation.

Rad Resources:

The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello! We are Dana Linnell Wanzer, evaluation doctoral student, and Tiffany Berry, research associate professor, from Claremont Graduate University. Today we are going to discuss why you should measure participants’ motivation for joining or continuing to attend a program.

Sometimes, randomization in our impact evaluations is not possible. When this happens, issues of self-selection bias can complicate interpretations of results. To help identify and reduce these biases, we have begun to measure why youth initially join programs and why they continue participating. The reason participants join a program is a simple yet powerful indicator that can partially account for self-selection biases while also explaining differences in student outcomes.

Hot Tip: In our youth development evaluations, we have identified seven main reasons youth join the program. We generally categorize these students into one of three groups: (1) students who join because they wanted to (internally motivated), (2) students who join because someone else wanted them to be there (externally motivated), or (3) students who report they had nothing better to do. As an example, the figure below displays the percentage of middle school students who joined a local afterschool enrichment program for each reason:

[Figure: percentage of middle school students citing each reason for joining the afterschool program]

Hot Tip: Using this “reason to join” variable, we have found that internally motivated participants are more engaged, rate their program experiences better, and achieve greater academic and socioemotional outcomes than externally motivated participants. Essentially, at baseline, internally motivated students outperform externally motivated students and those differences remain across time.
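
As a hedged illustration of how a “reason to join” variable might be used (hypothetical data and column names, not the authors’ actual instruments or code), the sketch below tabulates the three motivation groups and compares a program outcome across them.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey extract: 'motivation_group' is already coded into the three
# categories described above; 'engagement' is an outcome scale score.
df = pd.read_csv("youth_survey.csv")
groups = ["internal", "external", "nothing better to do"]

# Share of participants in each motivation group.
print(df["motivation_group"].value_counts(normalize=True).round(2))

# Mean outcome by group.
print(df.groupby("motivation_group")["engagement"].agg(["mean", "std", "count"]))

# Simple one-way ANOVA as a first look (ignores nesting of youth within sites
# and any covariate adjustment the authors may use).
samples = [df.loc[df["motivation_group"] == g, "engagement"].dropna() for g in groups]
f, p = stats.f_oneway(*samples)
print(f"ANOVA across motivation groups: F={f:.2f}, p={p:.3f}")
```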

Lesson Learned: Some participants change their motivation over the course of the program (see table below). We’ve found that participants may begin externally motivated, but then choose to continue in the program for internal reasons. These students who switch from external to internal have outcome trajectories that look similar to students who remain internally motivated from the start. Our current work is examining why participants switch, what personal and contextual factors are responsible for switching motivations, and how programs can transform students’ motivational orientations from external to internal.

[Table: participants’ motivation group at program entry and at follow-up]

Rad Resource: Tiffany Berry and Katherine LaVelle wrote an article on “Comparing Socioemotional Outcomes for Early Adolescents Who Join After School for Internal or External Reasons.”

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members.


My name is Adrienne Zell and I work as an internal evaluator for the Oregon Clinical and Translational Research Institute, an organization that provides services to biomedical researchers at Oregon Health and Science University. I also volunteer as the executive director of a small nonprofit, Impactivism, which provides evaluation advice to community organizations. I have been a member of the Oregon Program Evaluators Network (OPEN) for over 10 years, and involved with its events committee for the past three years.

Many years ago, I was lent a collection of essays entitled Yoga for People Who Can’t Be Bothered to Do It. Geoff Dyer’s essays are first-rate – humorous, amorous, and reflective – but it is his brilliant title that has stuck with me. Although OPEN members and volunteers have diverse roles within the field of evaluation, a common theme in our events and conversations has been the effort involved in convincing organizational leadership, staff, and stakeholders that evaluation is worth doing and that they should have a direct role in it.

This past year, one of our members, Chari Smith, successfully organized an OPEN event and a conference workshop designed to deliberately connect evaluators and nonprofit staff and engage them in thinking about the reasons organizations may not “do” evaluation. As evaluators, we can rarely remove all of the identified barriers, but we can work to understand their complexity and re-focus on opportunities. Participation in OPEN, along with my experience as both an external and an internal evaluator, has inspired a list of tips on addressing evaluation gridlock in organizations and just “doing” it.

Hot Tip #1: Highlight current capacity. Most organizations are already practicing evaluation; they just aren’t using the term. They may collect data on clients, distribute feedback forms, maintain resource guides, or engage in other evaluation-related activities. Identifying and leveraging current accomplishments inspires confidence and makes evaluation seem less forbidding.

Hot Tip #2: Appeal to accountability. Program leaders, by definition, should be held accountable for program impact. The most recent issue of New Directions for Evaluation compares and contrasts the fields of performance management and evaluation. Program managers should regularly request and utilize both kinds of information when making decisions. Elements of these comparisons can be shared with program leadership, increasing understanding about the differences, commonalities, and benefits.

Hot Tip #3: Show them the money. Provide examples of how rigorous impact evaluation can result in stronger grant applications and increased funding. A recent EvalTalk post solicited such an example, and members were responsive. In addition, return on investment (ROI) and other cost analyses (see tomorrow’s post by OPEN member Kelly Smith) can demonstrate savings, inform resource allocation, and target areas for future investment.  A single ROI figure can “go viral” and motivate further evaluation work.
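
To make the ROI idea concrete with purely illustrative numbers: a program that costs $100,000 to run and can document $250,000 in avoided downstream costs has an ROI of ($250,000 − $100,000) / $100,000 = 150%, i.e., $1.50 returned for every dollar invested. A single, defensible figure of this kind is easy for leadership and funders to repeat.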


The American Evaluation Association is celebrating Oregon Program Evaluators Network (OPEN) Affiliate Week. The contributions all this week to aea365 come from OPEN members.

 

