Hi, I am Liudmila Mikhailova, an evaluation practitioner with 28 years of experience designing, managing, and evaluating U.S. government foreign assistance and diplomacy programs, with fieldwork in 25 countries. I also teach two courses in George Washington University’s International Education Program: International Program Evaluation and International Program Design.
The last decade has seen a significant effort within the U.S. government (USG) to strengthen requirements for evaluating U.S. foreign assistance programs. One example is the Presidential Memo (2021) on scientific integrity and evidence-based policymaking, which stated that federal agencies’ evidence-building plans and annual evaluation plans “shall include a broad set of methodological approaches for the evidence-based and iterative development and the equitable delivery of policies, programs, and agency operations.”
To comply with USG requirements, non-profit organizations and private evaluation companies apply methodological rigor by designing evidence-based evaluations aligned with the Guiding Principles for Evaluators and the AEA Public Statement on Cultural Competence in Evaluation. The latter document states: “Evaluations cannot be culture free. Culture shapes the ways in which evaluation questions are conceptualized, which in turn influence what data are collected, how the data will be collected and analyzed, and how data are interpreted.”
Even though evaluators increasingly strive to focus on culturally responsive evaluations that accurately reflect the life experiences of program participants in different socio-cultural contexts, the importance of culture in program implementation and evaluation is often underestimated.
Working with Diverse Cultures
- Use cross-cultural expertise to advocate for cross-cultural sensitivity in international program implementation and evaluation. For example, “losing face” carries deep shame in collectivistic, hierarchical cultures, so conversations with grantees about what did not work should be handled in a culturally appropriate way.
- Educate yourself and others on cross-cultural competence through books and colleagues! We have great diversity in our teams of programmers and evaluators. Sharing knowledge with SMEs and colleagues via webinars or presentations will raise overall cross-cultural awareness and help us properly design and execute culturally responsive evaluations.
- Pilot all evaluation instruments (we all do!). Listening to voices on the ground and learning what constitutes success from “their perspectives,” and which outcomes they associate with success, is critical. In Ukraine, for example, during week-long in-depth interviews and focus groups with selected institutions, we asked PIs and their teams to review survey questions for cultural accuracy. We applied a similar approach during our fieldwork in the Philippines, Indonesia, Mexico City, and Uganda. The results were rewarding and allowed us to finalize a culturally responsive survey that yielded rich data.
- Be a strong advocate for sharing evaluation results with the funder, colleagues, and other stakeholders. Effective formats include debriefings, presentations, and seminars. Engaging in-country partners and grantees in reviewing evaluation findings not only helps build continuing trust but also lays a long-lasting foundation for sustainable collaboration and engagement.
Here are some examples of methods I have found useful in supporting culturally responsive and equitable approaches in evaluation practice:
- Mixed methods (Creswell, J. W., & Plano Clark, V. L. (2011). Designing and Conducting Mixed Methods Research. Sage) are a recognized approach in non-experimental designs.
- Geert Hofstede’s dimensions of culture help in understanding the pace at which change occurs in a given culture.
- Ricardo Wilson-Grau’s Outcome Harvesting method is an effective tool for engaging direct beneficiaries in formulating outcome descriptions and identifying additional outcomes that clarify the “meaning of change” in complex socio-cultural systems.
- Rick Davies’ Most Significant Change (MSC) technique is effective because it engages stakeholders in a discussion of which changes should be highlighted when analyzing data on impact and outcomes.
- The UNDP Discussion Paper Innovations in Monitoring & Evaluating Results is a useful source for seeing which methods and tools are considered innovative and how they can be used.
I invite everyone to weigh in on the question we ask ourselves: What best practices, methods, and tools do you use to ensure culturally responsive and equitable evaluations?
The American Evaluation Association is hosting International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to AEA365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.