Hello. I’m Chris Stalker of Oxfam America with the second of a two-part series on policy advocacy evaluation. Yesterday, I explained that in almost 30 years working in advocacy and campaigning I’ve never felt less confident about ‘how change happens’ than I do now, and that is why we proposed an AEA Conference session on this issue where foundations, southern CSOs and INGOs can share some of their perspectives and ideas.
Lesson Learned:
There are, nevertheless, some tested good-practice assumptions to hold onto. As evaluators of outcomes and impact, and despite the turbulent, disruptive operating environment, it remains our job to seek to understand people’s experiences of change and to consider what has, or hasn’t, happened, and why or why not.
We know that this can be a complicated and complex reality, usually involving a range of perspectives and sometimes producing contested findings. Because of this, it is the evaluator’s responsibility to act with integrity, to reflect this reality accurately in their evaluation, and to give voice to all stakeholders and their views on what constitutes ‘facts’, including the views of groups that are seldom heard.
Advocacy evaluation has always been about building an aggregated and balanced sense of different perspectives. It should be seen as a way of understanding tested experience, a craft requiring shrewd judgment and tacit knowledge, rather than as a quasi-scientific methodology.
Evaluators and campaign and advocacy practitioners can inhabit such different spaces that, at times, a mutual incomprehension exists. One concern we often hear is that the discourse around professional evaluation (MEL) risks turning evaluation into an exclusionary notion that, at its worst, alienates practitioners.
One consequence of this is that it is even more important to close the space between evaluators and advocacy practitioners. The lived experience is that evaluation is too often seen as burdensome and of limited direct benefit, in part because it is disconnected from both stakeholders’ information needs and wider organisational and sectoral processes.
Rather than positioning evaluation as a parallel and separate entity, the approach we need now is to develop reflective learning and evaluation strategies that are accompanied, embedded and participatory, and that generate knowledge that is relevant, timely and used.
Experience tells us that learning best serves advocacy when its analysis and tools are in the hands of both advocates and evaluators working collaboratively.
This turbulent, disruptive context means it is more important than ever that we continue to communicate a picture of what this looks like and persuade people of the added value we bring to practitioners. Maybe we’ll then see more evaluations for learning and evidence-informed adjustments and, ultimately, a focus on assessing effectiveness, transformative change and contribution to impact.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.