Kia ora koutou katoa, I’m Julian King, a public policy consultant from Aotearoa New Zealand. Recently I’ve been working with Africa’s network of Financial Sector Deepening (FSD) organizations, providing evaluation capacity building for a new approach to evaluating value for money (VFM).
This approach to VFM is built on a foundation of evaluative reasoning, using rubrics co-developed with stakeholders. Oxford Policy Management and I worked with the FSDs to develop the VFM framework. This framework is exciting because it enables each FSD to evaluate its own VFM and tell its own performance story in a transparent and credible way. This not only fulfils accountability requirements to donors but also enables FSDs to use VFM assessment for reflection, learning and improvement.
Lessons Learnt: A few months ago I caught up with David Fetterman and attended his keynote at the Australian Evaluation Society Conference in Sydney. I had a lightbulb moment: our collaboration with the FSD network is aligned with Empowerment Evaluation – an approach that uses evaluation concepts, techniques and findings to foster engagement and self-determination. Each FSD has its own monitoring and results measurement (MRM) team, in charge of evaluating FSD performance and VFM. I am at the service of the MRM teams, sharing approaches and tools for VFM assessment, influencing enhancements to data collection systems, promoting evaluative thinking about VFM, and providing an independent check on VFM reports. I’m encouraging the FSD network to develop a community of practice around VFM, supporting each other to build a culture of evidence and evaluative thinking through cycles of reflection and action.
Hot Tips: You can evaluate VFM using familiar evaluation theories, methods and tools. If you understand evaluative reasoning and mixed methods, you’re already well on your way to being a VFM evaluator.
A rubric is more than a matrix of criteria and standards: it can act as a focal point for stakeholder engagement and empowerment. During evaluation design, rubric development leads to a shared understanding of what matters in the program and its context. Once developed, rubrics support clarity about what evidence is needed – including the rationale for using qualitative evidence and mixed methods where appropriate. Rubrics also support appropriate interpretation of evidence, enhancing evaluation validity, ownership and use. At the same time, each of these processes helps to build stakeholders’ evaluation skills.
Rad Resources:
- OPM’s approach to assessing Value for Money (King & OPM, 2018)
For more information: FSD Africa’s blog on the new VFM approach