CLEAR Week: Nidhi Khattri on Building systems of evaluations, not one evaluation at a time

I’m Nidhi Khattri from the CLEAR Global Hub at the World Bank’s Independent Evaluation Group. As a member of the team that got CLEAR up and running, I’ve long been interested in how countries make evidence-based decisions and the role that evaluation plays in that process.

Coming from a research background in which I was more concerned with producing evaluations, I began reading about how evaluations can be used systematically. This book, on how governments use evaluations to inform budget decisions, was especially informative.

I came to appreciate the ecology surrounding the production and use of evaluation, one grounded in public sector (or indeed organizational) management. For evaluation evidence to be used, it is not enough for the evaluation to be technically sound. It must also be timed correctly and connected closely to the decision points in the policy cycle – policy design and budget allocation, program design, implementation, review, and back to budget allocation (both within and across sectors and programs) – and to the fundamental questions that policymakers and program implementers must contend with at those specific points in the cycle. Furthermore, the set of evaluations an organization or a government conducts or commissions must itself reflect principles of effective and efficient use of resources, which help guide the choice of evaluations.

Many countries (and large organizations) have developed institutional mechanisms and arrangements to deal with these issues. These attempt to address the different points in the policy cycle, but they close the loop only partially. Some focus predominantly on budget decisions. Others are far more robust in considering and solving implementation issues. Still others focus much more on accountability at the end. In part, this is because of issues of coordination and capacity across the range of ministries and departments; it is also due to differences in management philosophy and in the use of monitoring rather than evaluation. Similarly, governments (and organizations) differ considerably in how rationally they decide on the set of evaluations to undertake, ranging from somewhat formulaic approaches to letting “…a thousand flowers bloom.”

This subject – the use of evaluation as a tool for public sector management – intrigues me, and I wonder how it will evolve with greater access to technology and to multiple sources of information collected and analyzed on an ongoing basis by non-evaluators. Will evaluation be tied less to decisions at specific points in time and far more to real-time decision-making? Will evaluations become less “evaluative” and more “facilitative” along the entire cycle? If so, the question of choosing a set of evaluations may become moot.

Rad Resources:

World Bank Independent Evaluation Group’s case study series on M&E systems

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. All contributions this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

