
Measuring DEI in Our Own Workforce: Lessons from Four Studies Across Two Years by Laura Kim and Brooke Hill

Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.


We are Laura Kim (Senior Consultant at the Canopy Lab) and Brooke Hill (Senior Program Manager at Social Impact). Laura is part of the team that works on Canopy’s Inclusion and Leadership series, which explores the forces that influence who gets to advance in international development and why. Brooke is the technical lead for the BRIDGE survey and co-leads the Equity Incubator, a lab studying equity and inclusion through data. 

“You measure what you treasure” is a saying we are all familiar with. Over the last several years, against the backdrop of COVID-19 and an increased focus on DEI, the Social Impact and Canopy Lab teams decided to turn the evaluator lens onto our own workforce. Social Impact launched BRIDGE to capture industry-level DEI data in US-based organizations, and the Canopy Lab focused on individual-level data to capture the impacts of the pandemic on different groups.

Throughout our studies, we learned and adapted our approaches to how we ask questions, analyze responses, and present information on a topic as complex and context-specific as DEI.

Lessons Learned

Here are some lessons learned from our experience:

Inclusion early and throughout: We can’t be experts on everything, especially when researching topics related to identity. Convene a diverse advisory council that can enrich your scope, reduce confirmation bias, and highlight blind spots. Both Social Impact and the Canopy Lab intentionally created advisory councils that represented the dynamic voices of our sector – individuals representing small businesses, evaluation expertise, government and donor perspectives, non-profits, DEI expertise, and more. The Canopy Lab also invited people with lived experience to serve as advisors. Through this community, we were able to tailor our questionnaires and analysis to be more accurate and utilization-focused.

To the best of your knowledge, include all identities, define them, and update them: It’s important that survey participants find their identities included as options. Do due diligence to be comprehensive in your lists of gender, race and ethnicity, and other identity-based categories (look out for BRIDGE resources coming soon). Always give respondents the option to write in an identity. If some terms are less common, include definitions. Lastly, update your terms as you learn more. Even though analysis may be harder when categories do not precisely match, it’s better to be representative and accurate than to leave identities out. In global settings, it has helped to ask respondents to self-identify whether they are members of an underrepresented group (whether racial, ethnic, or gender-based). The trade-off is a risk of bias or error, but we have opted to trust respondents’ self-identification.

Data is a tool for advocacy: As evaluators, we are not only building evidence. It is our power, privilege, and imperative to translate data in ways that strategically communicate and advocate for the opportunities we see, and to shine a light on inequity, exclusion, and challenges. Through our work, we include calls to action to continue to learn, reflect, and act. BRIDGE, for example, provided the evidence base that contributed to the creation of the Coalition for Racial & Ethnic Equity in Development (CREED).

While our community is getting better and more respectful at asking about gender, race, ethnicity, and disability, we still have a ways to go. How do we capture intersectionality? How do we incorporate socioeconomic class? How do we benchmark our findings? As we learn more, we’ll continue to adapt our practices. If you have any hot tips, send them our way!


Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
