Hello, I am Barbara Klugman (PhD), based in South Africa, once an anti-apartheid and women’s rights activist, now providing freelance strategy and evaluation support for social justice funders, networks and NGOs.
I work with groups engaged in organising and advocating for social or environmental justice. In this work, I have come to realise that sometimes the term ‘evaluation’ alone is enough to undermine the possibility of their initiating or further institutionalising their information-gathering, reflection, learning and adaptation processes. Their experience of ‘M&E’ is the requirement, created by their funders, that they name in advance what they will do and what they will influence. This might work well enough for a group running an already-established service, but it is entirely guesswork, and inappropriate, for groups whose effectiveness requires them to shift both protest and advocacy strategies as the broader public and political discourse shifts, and as windows of opportunity for influence open and then close. Whatever they plan, they may need to shift.
The term ‘M&E’ is associated with funders’ power and non-negotiable upward accountability, as is routine data-gathering. Yet many of these groups are profoundly reflective, undertaking research or consultations to understand their terrain and shape strategies, and engaging in the before- and after-action reviews that support emergent learning. Indeed, when running workshops on evaluation, I often argue that effective activists are built-in evaluators within complex systems. They read the terrain, the stakeholders, the diverse perspectives, the prevailing environment, and shape their strategies accordingly. After any action they ask: What worked? What did not, and why? What should we do differently next time? They nimbly shift strategies.
The challenge many face is that they do all this in the rhythm of their activism; but once they are more than a small group, they have to be able to document their influence and to build a shared analysis within their institutions and across their networks. Holding insights in their heads and hearts, or within small groups, is not enough. They also need the specifics of their outcomes, and their contributions towards influencing them, clearly documented, both for cross-institutional and network learning and to support communications and fundraising.
To strengthen their ability to capture their stories of change and to institutionalise their reflection and learning processes, I’ve stopped using the language of M&E or MEL. Instead, I ask about their approach to strategic reflection. While the term ‘learning’ is hip among evaluators at the moment, to many of my clients it is associated with school and education; ‘strategy’ is their lingo and resonates with them.
Related to this, I’ve learnt that when hiring a staff member to support social justice groups in data-gathering, documenting and making sense of their efforts, they need to be wary of applicants whose only experience in ‘M&E’ is checklist monitoring of compliance with contracts for funder-supported service provision, where data is not used for evaluation. They should rather seek someone who has experience in activism and advocacy, with training in social or political theory, who will bring to bear the principle of collective action and an evaluative lens.
- On fostering emergent learning, see: Darling et al. (2016). Emergent Learning: A Framework for Whole-System Strategy, Learning and Adaptation.
- On shifting funders’ approaches to accountability, see: Taylor, A., & Liadsky, B. (2018). Achieving Greater Impact by Starting with Learning, Taylor Newberry Consulting; and Honig, D. (2020). Actually Navigating by Judgment: Towards a New Paradigm of Donor Accountability Where the Current System Doesn’t Work. Policy Paper 169, Center for Global Development.
The American Evaluation Association is hosting Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to AEA365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
4 thoughts on “OL-ECB TIG Week: Must We Call It ‘Evaluation’? – How ‘M&E’ Language Can be a Barrier to Institutionalising Learning by Barbara Klugman”
Thank you for your thoughts, Barbara. Language has power! It’s fascinating how the connotations of the same words change from place to place, person to person, culture to culture. As you’ve noted, as educators we must take care to factor in the variety of contexts in our classrooms, organizations, or wherever we are working. This is compounded by the emotional weight of whatever we are dealing with; care in how we communicate is paramount. In the high school in which I teach in BC, Canada, we have put a lot of time and effort into removing the stigma around ‘learning’. Some students are fine with the idea; others consider it a judgement that one doesn’t already ‘know’ and has to learn. I wonder how much of this is culture-based: we tend to celebrate those who ‘did’ over those who are ‘doing’.
I agree that the term ‘M&E’ is associated with funders’ power and non-negotiable upward accountability, as is routine data-gathering. This thinking limits learning. For example, if from practitioners’ experience new knowledge is generated, or new skills, understandings and approaches are discovered that are better than what is being used, it is very difficult to change the already-established outcomes and outputs. Learning needs to be an integral part of interventions because it gives room for improvement based on practice.
Well said, Barbara. What you say resonates with my experiences embedded in organisations or initiatives as a part-time ‘MEL / M&E advisor’ or similar, and as an evaluation consultant and trainer. In my embedded roles I have fostered the use of ‘reflection spaces’, and when a growing organisation showed interest in developing its knowledge management more deliberately, I saw that as an opportunity to better integrate what we know as MEL processes into the cut and thrust of evidence gathering and its uses. On the other hand, I’ve found that amongst NGOs there are some with well-developed ‘learning cultures’, such that the terms ‘evaluation’ and even ‘evaluative thinking’ are widely understood to go far beyond what you rightly describe as the check-box process designed to answer to funders. However, these may be relatively rare organisations, so I agree alternatives to the term ‘evaluation’ will hit home better for many. But then again, obligations and institutional acceptance of the need for evaluations can be a springboard to doing meaningful, reflective exercises, so the term can be useful!
Thank you for these reflections. Often, as MEL practitioners (only using this term now), we don’t realize that our own biases may affect the outcome of such processes. We should learn to check our privileges and power as we undertake these kinds of initiatives. How about adopting a feminist approach to MEL? Language is key in all that we say, write and do.