Happy 2023, AEA365 readers! Liz DiLuzio here, Lead Curator of the blog. In the spirit of looking back before we move forward, this week of posts is a tribute to the seven posts that resonated the most among our readership in 2022, as determined by the number of shares accumulated. Today’s post had 232 shares across socials.
Hello, I am Barbara Klugman (PhD), based in South Africa, once an anti-apartheid and women’s rights activist, now providing freelance strategy and evaluation support for social justice funders, networks and NGOs.
I work with groups engaged in organising and advocating for social or environmental justice. In this process, I have come to realise that sometimes the term ‘evaluation’ alone is enough to undermine the possibility of their initiating or further institutionalising their information gathering, reflection, learning and adaptation processes. Their experience of ‘M&E’ is the requirement created by their funders that they name, in advance, what they will do and what they will influence. This might work well enough for a group running an already-established service, but it is entirely guesswork, and inappropriate, for groups whose effectiveness requires them to shift both protest and advocacy strategies as the broader public and political discourse shifts, and as windows of opportunity for influence open and then close. Whatever they plan, they may need to shift.
The term ‘M&E’ is associated with funders’ power and non-negotiable upward accountability, as is routine data-gathering. Yet many of these groups are profoundly reflective, undertaking research or consultations to understand their terrain and shape strategies, and engaging in the before- and after-action reviews that support emergent learning. Indeed, when running workshops on evaluation, I often argue that effective activists are built-in evaluators within complex systems. They read the terrain – the stakeholders, the diverse perspectives, the prevailing environment – and shape their strategies accordingly. After any action they ask: What worked? What did not, and why? What should we do differently next time? They nimbly shift strategies.
The challenge many have is that they do so in the rhythm of their activism, but once they are more than a small group, they have to be able to document their influence and to build a shared analysis within their institutions and across their networks. Having insights inside their heads and hearts or small groups is not enough. They also need the specifics of their outcomes and their contributions towards influencing them clearly documented, for cross-institutional and network learning as well as to support communications and fundraising.
To strengthen their ability to capture their stories of change and to institutionalise their reflection and learning processes, I’ve stopped using the language of M&E or MEL. I ask about their approach to strategic reflection. While the term ‘learning’ is hip among evaluators at the moment, to many of my clients it is associated with school and education; ‘strategy’ is their lingo and resonates for them.
Related to this, I’ve learnt that when hiring a staff member to support social justice groups in data-gathering, documenting and making sense of their efforts, they need to be wary of applicants whose only experience in ‘M&E’ is checklist monitoring of compliance with contracts for funder-supported service provision, where data is not used for evaluation. They should rather seek someone who has experience in activism and advocacy with training in social or political theory, who will bring to bear the principle of collective action and an evaluative lens.
- On fostering emergent learning, see: Darling et al. (2016). Emergent Learning: A framework for whole-system strategy, learning and adaptation.
- On shifting funders’ approaches to accountability, see Taylor, A., & Liadsky, B. (2018). Achieving Greater Impact by Starting with Learning, Taylor Newberry Consulting; and Honig, D. (2020). Actually Navigating by Judgment: Towards a new paradigm of donor accountability where the current system doesn’t work. Policy Paper 169, Center for Global Development.
The American Evaluation Association is hosting Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to AEA365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
2 thoughts on “OL-ECB TIG Week: Must We Call It ‘Evaluation’? – How ‘M&E’ Language Can be a Barrier to Institutionalising Learning by Barbara Klugman”
Thanks for sharing! I love the idea of evaluation as a catalyst for advocacy.
Thanks for this reflection. Loved it.
The language really gets in the way, also on the “commissioning” side. M&E language is resulting in stale practices: an “evaluation” is about calling a consultant and getting a report. It is very hard to shift this perception, and to use better processes. And when you manage to do so, it is often an epiphany for commissioners: they discover that the resources they have for “evaluation” can be put to a much better use than producing just a report for donors.
The challenge is that many useful words are now co-opted by this bureaucratic lingo. As you pointed out, we need to use different words to describe what is effectively M&E, evaluation, data collection… But the sad twist is that, when we use different words, they are understood by the bureaucracy as different processes, and those are hard to resource and recognise. The paradox? There will be money for a conventional evaluation (of limited use), but it is hard to use the same money for ongoing, evaluatively rich processes, which can really improve the quality of action.