Greetings from beautiful New Hampshire! I am Rosie Emerson, a Kenyan external evaluator of international development programs, mostly in Sub-Saharan Africa. I have led over 50 evaluation projects, and in this post I would like to share some overarching lessons from my experience:
- Programs should stop trying to “boil the ocean”: Too often, development programs have an unworkably large scope that is not commensurate with the available resources and the time within which results are expected. Programs need to focus on a few critical areas, channel all their efforts into delivering on those, and set realistic timelines for change.
- Programs should learn quickly and often: There is inadequate learning from monitoring and evaluation. Adaptive management practices are beginning to gain a foothold in the development world, but they are not nearly as entrenched as they should be. Undue focus is placed on elaborate evaluations with lengthy reports. Instead, programs should strive to learn quickly and often, embracing small evaluations that gather directional insights and confirm they are moving in the right direction.
- Extractive evaluations: Some evaluations are highly (and unnecessarily) extractive, with lengthy instruments and extremely large sample sizes. Evaluators need to push back on requirements that are not necessary for the integrity of the evaluation and that place undue burdens on already disadvantaged groups.
- Programs should be designed in collaboration with local stakeholders: Programs are often designed by technocrats who are far removed from the realities of the people the program is meant to benefit. Some programs I’ve evaluated are so disconnected from realities on the ground that failure is inevitable. Program designers should be held accountable for working in close collaboration with stakeholders, and funders should allocate a percentage of the budget to stakeholder engagement prior to program inception.
- Prescriptive evaluations and program designs: International development programs and their related evaluations need to be far more open-ended because these contexts are highly unpredictable. Programs and evaluations that dictate every aspect in advance leave little room for adaptation and responsiveness.
- Underutilization (or misapplication) of qualitative research methods: There is growing appreciation of qualitative research methods in international development evaluations. However, some evaluators are not trained in these methods and believe that sprinkling quotes here and there is sufficient. In other instances, evaluators work with local researchers who are not experienced interviewers and therefore fail to gather useful insights. Sometimes clients insist on extremely large sample sizes for qualitative work, which makes rigorous analysis impossible within a short timeframe. As evaluators, we need to do our part to increase the quality and usefulness of qualitative research: we should train ourselves where we lack the skills, and we should use our license as evaluators to educate clients on sample sizes that allow for rigorous analysis.
What lessons have you learned that you would like to share with program designers and evaluators?
The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.