AEA365 | A Tip-a-Day by and for Evaluators

Greetings from Ian Patrick and Anne Markiewicz, in Melbourne, Australia – evaluators active in evaluation design, implementation and training for a range of domestic and international clients. We’ve been reflecting on a tortured area of evaluation practice: the expectation frequently placed on evaluators to identify the IMPACT of a program.

Every evaluator breathes a sigh of relief when their clients or stakeholders are knowledgeable about evaluation and hold reasonable expectations about what it can and can’t do. But how many evaluators have instead felt the heavy weight of expectations to establish high-level results demonstrating that a program has made a big difference to a region, a country or the world! Or, in a related scenario, an eagerness to establish longer-term results from a program that has only been operating for a limited duration! Other unrealistic expectations include a program-centric focus that sees all results as attributable to the program, minimizing the contribution of stakeholders and partners to change, or a limited lens on the perceived value of different types of results.

Such situations call for cool-headedness and a calm, educative approach from the evaluator. Where possible, the evaluator has much to gain from open discussion and exchange of views, tempering unrealistic aspirations and negotiating realistic expectations for the evaluation. Here are some of the strategies we have found productive in such contexts:

HOT TIPS:

Reflect on Impact: As an upfront strategy, become clear with clients/stakeholders about what is meant by ‘impact’. Be aware that the term is often used loosely, even lazily, to support sweeping expectations. Introduce other helpful terminology to identify and demarcate different categories of results, such as intermediate outcomes. Realistic discussion may well clarify that these types of results can be identified within the program time frame. Intermediate results, once identified and understood, are often highly valued, and stand in contrast to more elusive, longer-term impact.

Decompress Time: Proactively address the tendency for the time frames associated with a program’s results to become compressed. A fixation on end states can obscure the important intermediary stages through which change evolves. Program theory and program logic approaches provide a means to identify expected changes over realistic time frames.

Remember Others: Resist the tendency for change to be unilaterally attributed to a program. Recognise and focus on the contribution that related stakeholders and partners make to change.

Adopt Pluralist Approaches: Promote the application of multiple perspectives and ways of identifying and measuring change rather than reliance on a single method. Mixed methods approaches support a more subtle and nuanced view of change, particularly how it manifests and is experienced during a program’s life cycle.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Ian and Anne? They’ll be presenting as part of the Evaluation 2014 Conference Program, October 15-18 in Denver, Colorado.

I am Ricardo Wilson-Grau, an evaluator based in Rio de Janeiro but working internationally. Over the past nine years, co-evaluators and I have developed the Outcome Harvesting tool while performing two dozen developmental, formative and summative evaluations. Half of these evaluations were of international social change networks; the other half were of the programmes of international development funding agencies.

The two dozen evaluands had in common that they did not have plans that could be conventionally evaluated: the original definitions of what they aimed to achieve, and what they would do to achieve it, were either not sufficiently specific and measurable to compare what was planned with what was done and achieved, or the evaluands had to cope with dynamic, uncertain circumstances. This complexity arose in part because all were attempting to influence changes in the behaviour of social actors over whom they had no control, in order to make progress towards improvements in people’s lives, the conditions of society or the state of the environment.

An August 2013 discussion paper from the United Nations Development Programme summarized it this way: Outcome Harvesting is “an evaluation approach that — unlike some evaluation methods — does not measure progress towards predetermined outcomes, but rather collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change”.

Hot Tip: You can periodically demonstrate, and be accountable for, concrete, verifiable and significant results of your work, negative as well as positive, even if the outcomes were not planned or intended and your contribution has been one amongst that of others, whether direct or indirect.

One instrument that will support you: see the Outcome Harvesting Brief.

Rad Resources:

Here are links to three diverse examples of Outcome Harvesting use:

The summative evaluation of Oxfam Novib’s €22 million program to support 38 grantees working on sustainable livelihoods and social and political participation, which documents outcomes from 111 countries.

A report on the evaluation experience of identifying and documenting 200 emergent outcomes of the BioNET global network.

After ten World Bank Institute teams piloted a customized version of Outcome Harvesting last year, the World Bank published a booklet of the cases in June 2014 and now lists the tool amongst its resources for monitoring and evaluation.

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. Please consider contributing – send a note of interest to aea365@eval.org. Want to learn more from Ricardo? He’ll be presenting as part of the Evaluation 2014 Conference Program, October 15-18 in Denver, Colorado.
