ICCE TIG Week: Embedding inclusive accountability by building trust by Jeremy Danz & Dhaval Kothari

Hello everyone. We are Jeremy Danz and Dhaval Kothari, co-founders of The ADMEL Lab, a collaborative focusing on embedding inclusive accountability in M&E frameworks.

Since the start of the COVID-19 pandemic, we have worked on COVID-19 contact tracing programs in Massachusetts and Washington and on corruption and governance programs throughout South Asia, as well as survey and accountability mechanism design for programs in Cambodia and Myanmar.

Regardless of our focus, we always try to consider how organizations and programs maintain accountability to those they claim to serve. Especially in the international development and humanitarian response sectors, organizations often end up primarily accountable to their donors, and perhaps secondarily accountable to other stakeholders within the aid industry itself, as opposed to people who live in areas hosting organizations’ interventions.

Defining the directionality of accountability can be challenging, as terms like “downward accountability” imply a hierarchical relationship, perhaps with inequities, injustices, and shades of neocolonialism baked right in.

So, when evaluating aid sector responses from an accountability perspective, we believe that the potential for evaluation, and for meaningful work, rests on the trust built between implementing organizations and the communities they are working to serve. Confirming the existence of functional trust between organizations and communities may not always be possible, so we propose one quick question upon which to base conclusions.

Cool Trick:

Ask this first question: “Do people living in areas where organizations are implementing programs have the power to change programming, or even to reject programming entirely?”

If opposition to programming at the local level is not heard, or if desires for altered programming are not incorporated into planning processes, then organizations cannot be said to be accountable to the communities and individuals living in areas where they are implementing programs. When asking this question, evaluators must consider whether vulnerable groups from areas hosting interventions are included in these accountability mechanisms and decision-making processes. Failure to include the most vulnerable may inadvertently exacerbate pre-existing inequities and power imbalances.

Organizations and evaluators should remember to include members of the following groups; if evaluators determine that organizations have not done so, this should be specifically articulated and explained.

  • Local leaders
  • Women’s groups
  • Ethnic minority groups
  • People with disabilities
  • Subsistence farmers
  • Other vulnerable groups

Our second quick question, which helps evaluators rapidly discern whether organizations are indeed accountable to the people living in areas hosting their interventions, involves financial transparency.

Cool Trick:

Then, ask this second question: “Would staff at implementing organizations, either expatriate international staff or ‘national’ staff, be comfortable and willing to share programmatic budgeting information with individuals living in areas hosting development and humanitarian response operations?”

In our careers in both South and Southeast Asia, we have received direct instructions from our managers not to share information about salaries and benefits, per diem calculations, incidental expenses, etc. with our national colleagues or with the people living in areas our programs are attempting to serve. These organizations often do have financial transparency obligations to their donors and taxpayers at home. If organizations are unwilling to share these financial details with the people they are ostensibly serving, then their directional accountability is primarily inward, or at least directed somewhere other than the ground where they are working.

We believe that if organizations and evaluators include these two simple questions in their work, it should be possible to determine quickly whether implementing organizations and their programming are truly accountable to people living in areas hosting their interventions, regardless of the type of programming or the host country of the intervention.

The American Evaluation Association is celebrating International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to aea365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.

2 thoughts on “ICCE TIG Week: Embedding inclusive accountability by building trust by Jeremy Danz & Dhaval Kothari”

  1. Hello Jeremy and Dhaval,

    Thank you for your article. I have always struggled with the accountability mechanisms of international aid organizations and how often their programs appear to have a White/Western saviour complex. Not only that, but there is a bias that the Global South doesn’t have the capacity to provide relevant insight or, as you write, the right to participate in holding these programs accountable through evaluation.

    I appreciate that you recognize in your first question that affected persons should have the power to influence or end programs targeted at their communities. I would hope or expect that this would lead to greater collaboration in the program design and would likely introduce ideas or approaches that may be more effective at targeting specific populations.

    With respect to your second question, in your experiences working in South and Southeast Asia, when you were asked to withhold financial information, did you find it affected your working relationships? While I can appreciate that managers may feel salary information in particular could be disruptive if shared with local workers/volunteers, if our intention is to build trust then, as you state, full transparency is necessary.

    The last point I would add is that I hope we expand our transparency and accountability to always include the vulnerable populations we are hoping to serve in all programs; the trends you outline I also see in some aid programs offered domestically (Canada, for me).

    Thank you for the article and points to consider.
    nathan

  2. Hi Jeremy and Dhaval,

    My name is Arthur Sullivan and I’m a Professional Master in Education student at Queen’s University at Kingston, Ontario, Canada. I am also an elementary school teacher.

    I want to thank you for your thought-provoking article regarding embedding accountability through building trust.

    Establishing trust is one of the most important elements of an effective evaluation design. As you say, “organizations often end up primarily accountable to their donors, and perhaps secondarily accountable to other stakeholders within the aid industry itself, as opposed to people who live in areas hosting organizations’ interventions.” The successful Collaborative Evaluator is one who skillfully employs “people skills” and is able to establish trusting relationships with those they are working with to design the program evaluation (Shulha & Cousins, 1997, p. 200).

    The two questions you suggest all evaluators use to “confirm the existence of functional trust between organizations and communities” are excellent examples of probing questions, and I’ve incorporated them into my own Program Evaluation Design.

    Shulha, L., & Cousins, B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195-208.
