My name is Adam Kessler, and I work with the Donor Committee for Enterprise Development (DCED). The DCED has developed a monitoring framework called the “DCED Standard for Results Measurement”, which is currently used by over a hundred private sector development projects across five continents. This blog provides some lessons learned on why evaluators need good monitoring systems, and why implementing staff need good evaluators.
My experience working with private sector development programmes has shown me that they can be an evaluator’s worst nightmare. In private sector development, staff attempt to facilitate change in complex market systems, which shift quickly and unpredictably for all sorts of reasons. As a consequence, staff often modify their activities and target areas midway through implementation, potentially rendering your expensive baseline study useless. Moreover, links between outputs and outcomes (let alone impact) are hard to predict in advance, and hard to untangle after the event.
Lesson learned: If you want to evaluate a complex programme, ensure that it has a good monitoring system. A good private sector development programme relies on continual, relentless experimentation in order to understand what works in its context. If staff are not collecting and analysing relevant monitoring data, they’ll just end up with a lot of small projects which seemed like a good idea at the time. Not easy to evaluate. You’re going to need to see the data they used to make their decisions, and make your own judgement about its quality.
Hot Tip: Good evaluation and good monitoring aren’t all that different, after all. Do you want a robust theory of change, critically interrogating assumptions, outlining activities and examining how they interact with the political and social context to produce change? Guess what – programme staff want that too, though they might use shorter words to describe it. Good quality data? Understanding attribution? Useful for both evaluators and practitioners. Although incentives vary (hence the jealously-guarded independence of many evaluators), in effective programmes there should be a shared commitment to learning and improving.
Incredible Conclusion: Monitoring and evaluation are often seen as different disciplines. They shouldn’t be. Evaluators can benefit from a good monitoring system, and implementation staff need evaluation expertise to develop and test their theories of change.
Rad Resources:
1) I recently co-authored a paper called “Why Evaluations Fail: The Importance of Good Monitoring” which develops this theme further. It uses the example of the DCED Standard for Results Measurement, a results measurement framework in use by over a hundred projects that helps to measure, manage, and report results.
2) For an evaluation methodology that explores the overlap between monitoring and evaluation, see Developmental Evaluation.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Thanks for this post Adam, I couldn’t agree with you more. Working in the international development sector, and having spent time on both the funder and fundee side, I have seen both extremes: full-on monitoring systems with no evaluation plans, and the polar opposite.
Currently, I am at a social venture which takes a private sector approach to spurring community economic growth. We employ an iterative, product-development strategy to roll out an integrated community development model, treating quick failure and iteration as a means to reach the most effective, impactful, and scalable solutions rapidly. For this, we need real-time monitoring for decision-making as well as longer-term (and by longer term, we mean annual) evaluation to determine whether we have the impact to scale. It’s a challenge to balance both the M and the E in our decision-making toward our time-bound mission.
I think a lot can be learned from applying private sector approaches to traditional development, on both the operations and programme side, in ways that extend into monitoring and evaluation. But I also think that the more private-sector-minded agencies, such as the DCED, along with the impact investing and social enterprise players, can learn a lot from the traditional international development sector, especially when it comes to measuring effectiveness and impact. While I hear the rhetoric in the fora, I don’t see a lot of actionable cross-over. Looking forward to diving into some of your recommended resources.