Hello! I’m Laura Budzyna, and I’m writing from England, where I’ve just had the pleasure of spending a week as guest faculty for the five-day Oxford Impact Measurement Programme, under the visionary leadership of AEA SIM TIG co-founder Karim Harji.
The 42 participants were all grappling with impact measurement (IM) from very different vantage points: philanthropy, impact investing, corporate sustainability, and more. Each was an ambassador for IM within their own organization – many had been charged with building out nascent “impact” units – and each was here to refine their toolkit and learn how to “do better.”
Critically, the course did not champion any particular perspective, standard, or technique, but rather explored the diversity of approaches that have emerged as different fields evolve and converge. Moreover, the course made clear that every approach, from developmental evaluation to Lean Data to IRIS+, grew out of a specific context and serves a particular purpose. The course challenged all actors, “old” and “new,” to question their assumptions about which approach is “best,” and instead to start with the question at hand.
In other words, we should begin not by asking how to measure, but why and for whom, anchoring the measurement approach in people and purpose.
I had the delightful task of kicking off each morning with an illustration of the day’s theme and a discussion prompt. See below for a window into the first three days!
Day 1: Why measure? For whom?
Day 2: What to measure? Who measures?
Day 3: How and when to measure?
We hoped to create an atmosphere where actors from the public, private, and non-profit sectors could compare notes, challenge perspectives, and come away with concrete ideas and solutions. A few things created the enabling environment for this exchange:
- Recruit participants – and speakers – from a diverse range of backgrounds: The folks in the room hailed from private equity, ESG, family offices, corporations, startups, accelerators, standard-setters, international NGOs, and beyond. The faculty team – Marcus Bleasdale, Emilie Goodall, Penny Hawkins, Heather Krause, and myself – had each worked across multiple sectors, and between us we could speak to most of the contexts in the room.
- Make the implicit explicit: With 42 participants come 42 different perspectives on how one “should” measure impact – and we wanted to make that clear from the start. We kicked off the week with a “human histogram” activity, where participants physically “took a stand” on pairs of opposing statements (e.g., “IM should always strive for standardization” vs. “IM should always be tailored to the organization”), then challenged them to think about how their organizational contexts had affected their stances.
- Get practical: In small group sessions, each participant shared an IM challenge and invited others to brainstorm solutions: the out-of-the-box ideas reminded us of the value of bringing non-evaluators to the evaluation table. At a peer-to-peer showcase, participants shared an IM tool they had developed (a theory of change, an impact scoring tool) in rotating small groups. This illustrated not only the deep experience in the room, but also our dynamic IM field in action – the genuine effort to try something, iterate, and try again.
Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.