My name is Adam Kessler, and I work with the Donor Committee for Enterprise Development (DCED). The DCED has developed a monitoring framework called the “DCED Standard for Results Measurement”, which is currently used by over a hundred private sector development projects across five continents. This blog post shares some lessons learned about why evaluators need good monitoring systems, and why implementing staff need good evaluators.
My experience working with private sector development programmes has shown me that they can become an evaluator’s worst nightmare. In private sector development, staff attempt to facilitate change in complex market systems, which shift quickly and unpredictably for all sorts of reasons. As a consequence, staff often modify their activities and target areas mid-way through implementation, potentially rendering your expensive baseline study useless. Moreover, links between outputs and outcomes (let alone impact) are hard to predict in advance, and hard to untangle after the event.
Lesson learned: If you want to evaluate a complex programme, ensure that it has a good monitoring system. A good private sector development programme relies on continual, relentless experimentation in order to understand what works in its context. If staff are not collecting and analysing relevant monitoring data, they’ll just end up with a lot of small projects which seemed like a good idea at the time. Not easy to evaluate. You’re going to need to see the data they used to make their decisions, and make your own judgement about its quality.
Hot Tip: Good evaluation and good monitoring aren’t all that different, after all. Do you want a robust theory of change, critically interrogating assumptions, outlining activities, and examining how they interact with the political and social context to produce change? Guess what – programme staff want that too, though they might use shorter words to describe it. Good quality data? Understanding attribution? Useful for both evaluators and practitioners. Although incentives vary (hence the jealously guarded independence of many evaluators), in effective programmes there should be a shared commitment to learning and improving.
Incredible Conclusion: Monitoring and evaluation are often seen as different disciplines. They shouldn’t be. Evaluators can benefit from a good monitoring system, and implementation staff need evaluation expertise to develop and test their theories of change.
1) I recently co-authored a paper called “Why Evaluations Fail: The Importance of Good Monitoring”, which develops this theme further. It uses the example of the DCED Standard for Results Measurement, a results measurement framework in use by over a hundred projects that helps them measure, manage, and report results.
2) For an evaluation methodology that explores the overlap between monitoring and evaluation, see Developmental Evaluation.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.