AEA365 | A Tip-a-Day by and for Evaluators

TAG | outcome harvesting

I’m Marti Frank, a researcher and evaluator based in Portland, Oregon. Over the last three years I’ve worked in the energy efficiency and social justice worlds, and it’s given me the opportunity to see how much these fields have to teach one another.

For evaluators working with environmental programs – and energy efficiency in particular – here are two lessons I’ve learned that will help us do a better job of documenting these programs’ impacts.

Lessons Learned:

1) A program designed to address an environmental goal – for example, reducing energy use or cleaning up pollution – will almost always have other, more far-reaching impacts. As evaluators, we need to be open to these in order to capture the full range of a program’s benefits.

Example: A weatherization workshop run by Portland non-profit Community Energy Project (where I am on the Board) teaches people how to make simple, inexpensive changes to their homes to reduce drafts and air leaks. While the program’s goal is to reduce energy use, participants report many other benefits: more disposable income, a reduced need for public assistance, less worry about paying bills, and more time to spend with family.

2) Not all people will be equally impacted by an environmental program, or even impacted in the same way. Further, there may be systematic differences in how, and how much, people are impacted.

Example #1: Energy efficiency programs assign a single value to energy savings, even though the same quantity of savings will mean very different things to different households, depending in large part on their energy burden (the percentage of their income they spend on energy).
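To make Example #1 concrete, here is a minimal sketch, using entirely hypothetical household figures, of how identical dollar savings translate into very different relief depending on energy burden:

```python
# A minimal sketch with hypothetical figures: the same $200/year of savings
# relieves a low-income household far more, relative to income, than a
# high-income one.

def energy_burden(annual_energy_cost: float, annual_income: float) -> float:
    """Energy burden: the share of income spent on energy."""
    return annual_energy_cost / annual_income

# Hypothetical households: identical energy bills, very different incomes.
households = [
    {"name": "Household A", "income": 25_000, "energy_cost": 2_400},
    {"name": "Household B", "income": 125_000, "energy_cost": 2_400},
]
savings = 200  # the program saves each household the same $200/year

for h in households:
    before = energy_burden(h["energy_cost"], h["income"])
    after = energy_burden(h["energy_cost"] - savings, h["income"])
    print(f"{h['name']}: burden {before:.1%} -> {after:.1%}; "
          f"savings = {savings / h['income']:.2%} of income")
```

In this sketch, the same $200 frees up five times as large a share of income for the lower-income household – exactly the difference that a single savings value obscures.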

Example #2: A California energy efficiency program provided rebates on efficient household appliances, such as refrigerators. Although the rebates were available to everyone, the households that redeemed them (and thus benefited from the program) were disproportionately wealthy and college-educated relative to all Californians.

Rad Resources:

I’ve found three evaluation approaches to be helpful in identifying unintended impacts of environmental programs.

Outcome harvesting. This evaluation practice encourages us to look for all program outcomes, not just those that were intended. Ricardo Wilson-Grau, who developed it, hosts this site with materials to get you started.

Intersectionality. This conceptual approach originated in feminist theory and reminds us to think about how differing clusters of demographic characteristics influence how we experience the world and perceive the benefits of social programs.

Open-ended qualitative interviews. It’s hard to imagine unearthing unexpected outcomes using closed-ended questions. I always enjoy what I learn from asking open-ended questions, giving people plenty of time to respond, and even staying quiet a little too long. And, I’ve yet to find an interviewee who doesn’t come up with another interesting point when asked, “Anything else?”

The American Evaluation Association is celebrating Environmental Program Evaluation TIG Week with our colleagues in the Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

I am Ricardo Wilson-Grau, an evaluator based in Rio de Janeiro but working internationally. Increasingly, I am called upon to serve as a developmental evaluator, and I have found the concept of the “inquiry framework” (Chapter 8 in Developmental Evaluation[1]) invaluable for co-creating developmental evaluation questions and agreeing on how they will be answered. As Michael Quinn Patton says: “Matching evaluation questions to particular situations is the central challenge in developmental evaluation’s situational responsiveness and adaptability…”.[2]

Developmental evaluation does not rely on any particular inquiry framework, just as its toolbox is open to a diversity of designs, methods and tools. What is appropriate depends on the innovation challenges a project, program or organization faces at a given point in time. For example, with one client I used a complexity inquiry framework to support the two-month design of a regional peace-building initiative on a continent with a track record of failures in similar attempts. Then, we considered these potential frameworks to support the first stage of implementation: a) Driving innovation with principles, b) Focusing on systems change, c) Fomenting collaboration for innovation, d) Confronting wicked problems and e) Outcome Harvesting.

In light of the developmental challenge this emerging initiative faced, there were sound reasons for using any one of these frameworks, or a combination of them. The client’s most pressing immediate need, however, was to know, in as close to real time as possible, what observable and verifiable changes it was influencing in actors who could not be predetermined. Thus, they chose Outcome Harvesting.

Hot Tip: Are you in a situation of social innovation that aims to influence changes in behavior writ large, from individual actions to organizational or institutional changes in policies or practices? Do you need concrete evidence of those achievements as they happen, along with an understanding of whether and how the innovative efforts contributed to those changes? If yes and yes, Outcome Harvesting may be a useful inquiry framework for you.

Rad Resources: In this video I explain the Outcome Harvesting tool in less than three minutes; there you will also find further information.

You can obtain more information about Outcome Harvesting at Better Evaluation.

To explore using the tool with a client, consider this animated PowerPoint slide to support you in operationalizing the six iterative Outcome Harvesting steps.

[1] For more on developmental inquiry frameworks, see Michael Quinn Patton, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use, Guilford, 2011, Chapter 8.

[2] Ibid., pp. 227–228.

The American Evaluation Association is celebrating Developmental Evaluation Week. The contributions all this week to aea365 come from evaluators who do developmental evaluation. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Ricardo Wilson-Grau, an evaluator based in Rio de Janeiro but working internationally. Over the past nine years, co-evaluators and I have developed the Outcome Harvesting tool while performing two dozen developmental, formative and summative evaluations. Half of these evaluations were of international social change networks; the other half were of the programmes of international development funding agencies.

These two dozen evaluands had in common that their plans could not be conventionally evaluated: either the original definitions of what they aimed to achieve, and of what they would do to achieve it, were not specific and measurable enough to compare what was planned with what was done and achieved, or they had to cope with dynamic, uncertain circumstances. This complexity arose in part because all were attempting to influence changes in the behaviour of social actors over whom they had no control, in order to make progress towards improvements in people’s lives, the conditions of society or the state of the environment.

An August 2013 discussion paper from the United Nations Development Programme summarized: Outcome Harvesting is “an evaluation approach that — unlike some evaluation methods — does not measure progress towards predetermined outcomes, but rather collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change”.
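In practice, each harvested outcome is typically written up as a short, verifiable record that can later be substantiated and traced backward to the intervention. As a purely illustrative sketch (the field names below are my own, not a prescribed Outcome Harvesting schema), such a record might be structured like this:

```python
# Illustrative sketch only: one way an evaluator might structure harvested
# outcome records. Field names are hypothetical, not an official schema.
from dataclasses import dataclass, field

@dataclass
class HarvestedOutcome:
    actor: str          # the social actor whose behavior changed
    description: str    # what changed, when and where (observable, verifiable)
    significance: str   # why the change matters
    contribution: str   # how the intervention plausibly contributed, traced backward
    sources: list[str] = field(default_factory=list)  # informants and documents for substantiation
    substantiated: bool = False

outcome = HarvestedOutcome(
    actor="Municipal housing agency",
    description="In 2013 adopted weatherization standards for its public housing stock.",
    significance="Institutionalizes energy savings beyond individual households.",
    contribution="Program staff briefed agency officials and shared pilot results.",
    sources=["interview with agency director", "agency policy memo"],
)
```

Working backward from records like these, rather than forward from a logframe, is what distinguishes the approach.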

Hot Tip: You can periodically demonstrate, and be accountable for, concrete, verifiable and significant results of your work, negative as well as positive, even if the outcomes were not planned or are unintended, and even if your contribution has been one amongst others, whether direct or indirect.

One instrument that will support you is the Outcome Harvesting Brief.

Rad Resources:

Here are links to three diverse examples of Outcome Harvesting use:

The summative evaluation of Oxfam Novib’s €22 million program to support 38 grantees working on sustainable livelihoods and social and political participation documents outcomes from 111 countries.

A report on the evaluation experience of identifying and documenting 200 emergent outcomes of the BioNET global network.

After ten World Bank Institute teams piloted a customized version of Outcome Harvesting last year, the World Bank published a booklet of the cases in June 2014 and now lists the tool amongst its resources for monitoring and evaluation.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Ricardo? He’ll be presenting as part of the Evaluation 2014 Conference Program, October 15-18 in Denver, Colorado.
