Greetings! I’m Lilian Chimuma, a doctoral student at the University of Denver. I have a background in research methods and a strong interest in the practice and application of evaluation. I believe cultural competence is central to the practice of evaluation and that it varies by context. I have recently been exploring the context and scope of evaluation practice in developing countries.
Evaluations in developing nations are largely founded on and informed by Western paradigms. Many of these models reflect philosophies specific to the environments and conditions in which they were developed, rather than those of the nations in which they are applied. Research and related discussions highlight cultural, contextual, and political concerns regarding the practice of evaluation in developing countries. Given AEA’s stance on cultural competence, and its role and value in quality evaluation, it is essential to review evaluation practices in nations that adopt paradigms developed in, or by evaluators from, regions other than their own. Such review would advance social justice for indigenous cultures.
In this discussion I focus on Africa, highlighting some of the issues and the efforts underway to advance the practice of evaluation there.
Hot Tips:
The African Evaluation Association (AfrEA): Since its inception, AfrEA has grown and expanded its visibility within and beyond the continent. Among the issues discussed by AfrEA members, the practice of evaluation across the continent’s diverse cultural contexts stands out. Factors impacting the practice of evaluation on the continent include:
- Education and education systems reinforce colonial models and approaches to learning; thus, the teaching and practice of evaluation reflect Western ideologies.
- Evaluations are conducted and sponsored by NGOs that are, in most cases, funded and managed by Western parties and entities. The tension between authentic evaluation practices and evaluation reflective of specific ideologies threatens legitimacy and credibility.
- Rising to the occasion, AfrEA strives to address concerns about evaluation, what it means, and its practice across Africa.
- The key theme for AfrEA’s 2017 Conference, “the foundation for promoting and advocating AfrEA’s ‘Made in Africa’ approach,” reinforces the need to revisit and redefine the practice of evaluation in Africa.
- Various blog discussions on AfrEA’s site highlight efforts and contributions toward contextualizing evaluation in Africa. These include posts by Steven Gruzd, Charles Dhewa, Janvier Ndagijimana, and Fanie Cloete (March, June, and December 2016).
- A suggested approach for evaluation in Africa (see “African Thought Leaders Forum on Evaluation and Development, Bellagio, Nov 2012,” pp. 30-38): an African-rooted evaluation approach!
Lessons Learned:
- Evaluation in Africa is rapidly evolving to take account of cultural and contextual factors.
- This trend is promising, with implications for more actionable and practical evaluations.
- Support for similar initiatives across other developing nations would advance and promote the growth and practice of evaluation, with positive implications for cultural competence.
- Evaluations should respect the local culture rather than simply adopt frameworks from other cultures, especially when those frameworks may not be appropriate!
The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
You are welcome!
Lilian, this is a wonderful post that should be shared widely.
Your link to the African Thought Leaders Forum on Evaluation and Development reminded me of Nora Bateson’s notion of “warm data”, a category of information based on contextual relational interaction: https://hackernoon.com/warm-data-9f0fcd2a828c . Could warm data be the end product of a relational evaluation?
Thanks,
Chad
Thank you for your response, Chad. I hope you have continued to share this to grow the debate around solidifying evaluation practice for and in Africa. I agree that warm data could be the end product of a relational evaluation, but I believe we have more learning to do in this area.
Thank you!
Lilian
Thank you for your comment, Sophie! Yes, cultural competence is vital when choosing an evaluation, and even more so when deciding to implement it. I believe thinking about the contexts within which we apply evaluation is key here. We have to ensure that our evaluations will actually be useful and meaningful to the users, the audience, and, most importantly, the communities within which we implement them. In other words, cultural contexts and social justice, among other things, should drive our evaluations. Rather than apply, we should adapt.
Also, it is important that one gets to know the stakeholders and program context well by building relationships and finding ways to make the evaluation (process and findings) as responsive to cultural dynamics as possible.
Finally, a great deal of research, particularly over the last 20 years, addresses many of the issues relating to cultural responsiveness and evaluation. I find it even more intriguing to consider this research in light of evolving cultural contexts and dynamics and how they shape evaluation.
To your other question, I am in the research methods and statistics (RMS) program. With regard to evaluation, many departments and programs on campus address evaluation as part of their training. The RMS program, by contrast, focuses on preparing students to work as evaluators and methodologists, meaning that students can join this program to learn the knowledge, skills, and dispositions needed to work as a full-time evaluator, either within an organization or as a consultant.
You can learn more from the following sites. The first one provides general information about the programs offered. You will find information about the RMS program here. The second one is for the current professor of practice in evaluation:
https://morgridge.du.edu/programs/research-methods-and-statistics/
https://morgridge.du.edu/staff-members/thomas-pitts-robyn/
Thank you so much for the reply! I love your point about building relationships with stakeholders as a way to learn about the program context, especially because contexts that seem similar can have nuanced differences that may only emerge through relationship-building. I really appreciate you sharing that info and links for the RMS program at DU — I look forward to learning more! Take care, Lilian!
Thank you for this insightful post! This makes me think about the ways in which evaluation is framed within the United States and the importance of thinking about cultural implications when choosing an evaluation model. I am curious about which program you are in at the University of Denver? I live in Denver and am thinking about grad school and I was curious to see which program at the school is engaging with evaluation.