Hello – we’re Claire Hutchings and Kimberly Bowman, working with Oxfam Great Britain (GB) on Monitoring, Evaluation and Learning of Advocacy and Campaigns. We’re writing today to share Oxfam GB’s efforts to adopt a rigorous approach to advocacy impact evaluation, and to ask for your help in strengthening it.
Rad Resources:
- Oxfam GB’s “Process Tracing” Research Protocol (draft)
- Effectiveness Review, Influencing of Public Policy and Management, Bolivia
As part of Oxfam GB’s new Global Performance Framework, each year we randomly select and evaluate a sample of mature projects. Projects that don’t lend themselves to statistical evaluation approaches, such as policy-change projects, are particularly challenging. Here, we have developed an evaluation protocol based on a qualitative research methodology known as process tracing. The protocol attempts to get at the question of effectiveness in two ways: by seeking evidence that can link the intervention in question to any observed outcome-level change, and by seeking evidence for alternative “causal stories” of change, in order to understand the significance of any contribution the intervention made to the desired change(s). Recognizing the risks of oversimplification and/or distortion, we are also experimenting with the use of a simple (1–5) scale to summarize the findings; a toy illustration follows below.
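To make that summary step concrete, here is a minimal, purely illustrative sketch in Python (emphatically not Oxfam GB’s actual protocol): the `CausalStory` structure, the evidence items, and the scale anchors are all hypothetical, and real process tracing weighs the probative value of each piece of evidence rather than simply counting items.

```python
# Purely illustrative: a toy structure for organizing process-tracing
# evidence around competing "causal stories". All names, evidence items,
# and scale anchors are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CausalStory:
    label: str                  # the intervention's story, or a rival explanation
    evidence_for: list = field(default_factory=list)
    evidence_against: list = field(default_factory=list)

def contribution_score(intervention: CausalStory, rivals: list) -> int:
    """Crude 1-5 roll-up: 5 = strong support for the intervention's story with
    rivals largely ruled out; 1 = little support, or a rival explains the
    change at least as well. Counting items here stands in for the careful
    evidence-weighing a real analysis would do."""
    support = len(intervention.evidence_for) - len(intervention.evidence_against)
    rival_support = max(
        (len(r.evidence_for) - len(r.evidence_against) for r in rivals),
        default=0,
    )
    if support <= 0 or support < rival_support:
        return 1
    return min(5, 2 + (support - rival_support))

campaign = CausalStory(
    "Coalition advocacy supported by the project",
    evidence_for=["minister cites coalition briefing",
                  "draft bill mirrors coalition text"],
)
rival = CausalStory(
    "Pre-existing government commitment",
    evidence_for=["policy appeared in an earlier manifesto"],
    evidence_against=["manifesto version lacked the key provisions"],
)
print(contribution_score(campaign, [rival]))  # -> 4
```

The point of the sketch is only that the score summarizes a judgment about the intervention’s story relative to its rivals, not about the intervention in isolation.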
Lessons Learned (and continuing challenges!):
- As a theory-based evaluation methodology, process tracing involves understanding the Theory of Change underpinning the project or campaign, but this is rarely explicit, and it can take time to tease out.
- It’s difficult (and important) to identify ‘the right’ interim outcomes to focus on. They shouldn’t be so close in time and type to the intervention that the evaluation becomes superfluous; nor should they sit so far down the theory of change that they can’t realistically occur, or be linked causally to the intervention, within the evaluation period.
- In the absence of a “signature” – something that unequivocally supports one hypothesized cause – what constitutes credible evidence of the intervention’s contribution to policy change? Can we overcome the charge of (positive) bias so often leveled at qualitative research? (One way of framing the question is sketched just after this list.)
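One framing of the credible-evidence question comes from the wider process-tracing literature (Van Evera’s “hoop” and “smoking gun” tests) rather than from our protocol itself: a piece of evidence is probative to the extent that it is more likely under our causal story than under the rivals. A minimal sketch in the same vein as above, with entirely invented probabilities:

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H | E), updated once the evidence E is actually observed."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A "hoop" test: evidence we would almost certainly see if our story were
# true, but which rival stories could also produce. Passing it barely
# raises our confidence in the hypothesis...
print(posterior(prior=0.5, p_e_given_h=0.95, p_e_given_not_h=0.60))  # ~0.61

# ...whereas a "smoking gun" (evidence very unlikely under any rival story)
# shifts confidence substantially when it is found.
print(posterior(prior=0.5, p_e_given_h=0.40, p_e_given_not_h=0.02))  # ~0.95
```

On this framing, a “signature” is simply evidence with a near-zero likelihood under every rival story; absent one, credibility comes from accumulating items that are each somewhat more likely under the intervention’s story than under the alternatives.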
And of course, all of this comes on top of some very practical implementation challenges! The bottom line: like all credible impact evaluations, these take time, resources, and expertise to do well. We have to balance real resource and time constraints against our desire for quality and rigor.
As we near the end of our second year working with this protocol, we are looking to review, refine, and strengthen our approach to advocacy evaluation. We would welcome your inputs! Please use the comments function below or blog about the issue to share your experience and insights, “top tips” or “rad resources.” Or email us directly.
The American Evaluation Association is celebrating Advocacy and Policy Change (APC) TIG Week with our colleagues in the APC Topical Interest Group. The contributions all this week to aea365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.