
ICCE WEEK: Evaluation in Developing Countries: The Case for Africa by Lilian Chimuma

Greetings! I’m Lilian Chimuma, a doctoral student at the University of Denver. I have a background in research methods and a strong interest in the practice and application of evaluation. I believe cultural competence is central to the practice of evaluation and that it varies by context. I have recently been exploring the context and scope of evaluation practice in developing countries.

Evaluations in developing nations are largely founded on and informed by Western paradigms. Many of these models reflect philosophies specific to the environments and conditions in which they were developed, rather than those of the nations in which they are applied. Research and related discussions highlight cultural, contextual, and political concerns about the practice of evaluation in developing countries. Given AEA’s stance on cultural competence, and its role and value in quality evaluation, it is essential to review evaluation practice in nations that adopt paradigms developed in, or by evaluators from, regions other than their own. Doing so would advance social justice for indigenous cultures.

In this discussion I focus on Africa, highlighting some of the issues and the efforts under way to strengthen the practice of evaluation.

Hot Tips:

The African Evaluation Association (AfrEA): Since its inception, AfrEA has grown and expanded its visibility within and beyond the continent. Among the issues its members discuss, the practice of evaluation across the continent’s diverse cultural contexts stands out, particularly the factors that shape how evaluation is carried out on the ground.

Lessons Learned:

  • Evaluation is evolving rapidly in Africa, with growing attention to cultural and contextual factors.
    • This is promising, with implications for more actionable and practical evaluations.
    • Supporting similar initiatives across other developing nations would advance the growth and practice of evaluation, and with it cultural competence.
  • Evaluations should respect the local culture rather than simply adopt frameworks from other cultures, especially when those frameworks may not be appropriate.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

6 thoughts on “ICCE WEEK: Evaluation in Developing Countries: The Case for Africa by Lilian Chimuma”

  1. Lilian, this is a wonderful post that should be shared widely.

    Your link to the African Thought Leaders Forum on Evaluation and Development reminded me of Nora Bateson’s notion of “warm data”, a category of information based on contextual relational interaction: https://hackernoon.com/warm-data-9f0fcd2a828c . Could warm data be the end product of a relational evaluation?

    Thanks,
    Chad

    1. Thank you for your response, Chad. I hope you have continued to share this post to grow the debate around solidifying evaluation practice for and in Africa. I agree that warm data could be the end product of a relational evaluation, but I believe we have more learning to do in this area.
      Thank you!
      Lilian

  2. Thank you for your comment, Sophie! Yes, cultural competence is vital when choosing an evaluation approach, and even more so when deciding how to implement it. Thinking about the contexts within which we apply evaluation is key here. We have to ensure that our evaluations will actually be useful and meaningful to the users, the audience, and, most importantly, the communities within which we implement them. In other words, cultural context and social justice should drive our evaluations, among other things. Rather than apply, we should adapt.
    It is also important to get to know the stakeholders and the program context well by building relationships and finding ways to make the evaluation (both process and findings) as responsive to cultural dynamics as possible.
    Finally, a great deal of research, particularly over the last 20 years, addresses many of the issues relating to cultural responsiveness and evaluation. I find it even more intriguing to consider this research in light of evolving cultural contexts and dynamics and how they shape evaluation.
    To your other question, I am in the research methods and statistics (RMS) program. With regard to evaluation, many departments and programs on campus address evaluation as part of their training. The RMS program, by contrast, focuses on preparing students to work as evaluators and methodologists: students join it to gain the knowledge, skills, and dispositions needed to work as full-time evaluators, either within an organization or as consultants.
    You can learn more from the sites below. The first provides general information about the programs offered, including the RMS program; the second is for the current professor of practice in evaluation:
    https://morgridge.du.edu/programs/research-methods-and-statistics/
    https://morgridge.du.edu/staff-members/thomas-pitts-robyn/

    1. Thank you so much for the reply! I love your point about building relationships with stakeholders as a way to learn about the program context, especially because contexts that seem similar can have nuanced differences that may only emerge through relationship-building. I really appreciate you sharing that info and links for the RMS program at DU — I look forward to learning more! Take care, Lilian!

  3. Thank you for this insightful post! It makes me think about the ways in which evaluation is framed within the United States and the importance of considering cultural implications when choosing an evaluation model. I am curious: which program are you in at the University of Denver? I live in Denver and am thinking about grad school, so I wanted to see which programs at the school engage with evaluation.
