Greetings! We are Clare Nolan and Sonia Taddy and last year we co-founded Engage R+D to help social sector organizations harness the power of evaluation, strategy, and learning to advance their missions.
As long-time evaluators dedicated to social change, we are keenly aware of the importance of enabling diverse stakeholders to share their truth and openly listen to that of others. But what does it take to create spaces that facilitate the authentic exchange of ideas in the philanthropic sector? Below are three traps that hold back truth-sharing, along with resources and tools to avoid them.
#1: The Accountability Trap. Foundation staff and trustees rightly want to know, “What difference are we making?” While being accountable for results is important, evaluation in philanthropy works best when it is viewed through an organizational learning and effectiveness lens. This enables grantees and foundation staff to be honest about barriers they are encountering and work more effectively together.
- Rad Resource: Grantmakers for Effective Organizations’ Evaluation in Philanthropy may be 10 years old, but it’s still a classic. It lays out the case for using evaluation as a tool for improvement and shows how different foundations put this approach into practice.
#2: The Insularity Trap. Foundation staff often rely on trusted colleagues for ideas and advice. While such networks are helpful, they can also limit access to new ideas and knowledge. As Janet Camarena of the Foundation Center asks, “Might there be a way to connect the dots and improve the effectiveness, efficiency, and inclusivity of our networks by changing the way we source, find, and share lessons learned?”
- Rad Resource: We recently partnered with the Foundation Center to publish a Grantcraft Guide to facilitate knowledge-sharing in the social sector. By sharing insights and lessons, foundations can help others and advance their own impact, too.
#3: The Bias Trap. Evaluators spend a lot of time thinking about how to mitigate statistical bias. But according to Chera Reid of the Kresge Foundation, “We cannot ‘outrigor’ our biases, as our research and evaluation designs are developed by people with lived experiences.” We need to think beyond sources of statistical bias and more deeply about the implicit biases we bring to our work, both personally and as a field.
- Rad Resource: Equitable Evaluation is creating an important space for funders and evaluators to reflect on the assumptions and values that underlie current evaluation practice, including how some truths and ways of knowing are privileged over others.
By emphasizing learning, supporting knowledge-sharing, and reflecting on bias, we can better use evaluation as a tool to raise important and challenging truths that are critical to advancing philanthropy’s impact.
The American Evaluation Association is celebrating Nonprofits and Foundations Topical Interest Group (NPFTIG) Week. All contributions to aea365 this week come from our NPFTIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Clare and Sonia,
I am currently taking a course on program evaluation, and one area that has particularly interested me is bias: how it can affect an evaluation’s findings and stakeholders’ willingness to act on its recommendations. For an assignment, I was tasked with creating my own program evaluation design. Since the social aspect of evaluation is important to me, I chose a local registered charity and identified questions such as “How effective is this program in helping to build community?” Your post “3 Traps that Hold Back Sharing Truths in Philanthropy” caught my attention as I consider what biases might hinder my evaluation design.
#1: The Accountability Trap
I wonder to what extent foundation staff and trustees would want to prove they are making a difference, and whether this would affect how they report information. Would less-than-favorable results discourage them from continuing with the project, or would they use those results to make changes that improve its effectiveness?
#2: The Insularity Trap
Would foundation staff and trustees be open to learning from the evaluation findings and consider opening their networks to increase their knowledge pool?
#3: The Bias Trap
Could the bias of stakeholders and evaluators be embraced to support an evaluation that is rooted in a desire to build community? Could bias for social betterment, in fact, be a factor that preserves stakeholder and evaluator accountability?
I appreciate your resources and suggestions shared and look forward to including these thoughts into my program evaluation design.
Hello Cherisse! My apologies — I’m only just seeing this now, which says a lot about these past 15 months. I’d be happy to share resources if they are still relevant. Please feel free to reach out to me at cnolan@engagerd.com.