My name is David Stanton Robinson, EdD. As President of DSRobinson & Associates, I have over 35 years of experience in consulting and evaluation across the mental health, early childhood education, healthcare, and higher education sectors. I’ve collaborated with Rhode Island public health groups, and I’m a member of the Evaluation Network of Rhode Island.
I’ve been using AI to increase equity in my evaluation and consulting work. AI has transformed my workflow, so let me share my journey with you.
At the beginning of each project, it’s essential to understand the problem accurately. AI has been helpful in synthesizing the perspectives of diverse community members and partners. On a project focused on substance use disorders and behavioral health emergencies in a rural community, AI made the equity issues much more transparent (ChatGPT), served as an editor for the vision and mission statements to ensure accuracy and readability (Lex.page), and summarized worksheet data, leaving more time for explaining results to diverse communities (Numerous.ai).
AI has also been invaluable as a research assistant on a YMCA project to incorporate substance abuse prevention and mental health education into youth programs (ChatGPT), and it has helped edit our reports, simplifying them so that a fifth grader can understand them (Canva, Scalenut, ResearchRabbit, Scholarcy; Guidde for videos). AI played a pivotal role in enhancing our surveys and interviews, recommending more precise language and additional questions to engage underserved groups (ChatGPT), and in broadening our data collection approaches for patients from diverse backgrounds (ResearchRabbit, Scholarcy).
AI has revolutionized my work, making it more efficient and effective at increasing equity, one small step at a time. Always be aware that AI results can incorporate bias or be untruthful, because the underlying training data tends to be biased. Carefully review all results from AI tools (@MushtaqBilalPhD, ChatGPT). AI applications are best used as collaborators and research assistants.
Lessons Learned
By leveraging AI in my evaluation and consulting work, I have experienced a transformative shift in my workflow that has significantly contributed to increasing equity. AI has proven invaluable at the beginning of each project, helping me synthesize multiple perspectives and gain a clearer understanding of complex issues. It has streamlined various tasks, such as creating project vision and mission statements, developing standard operating procedures, and designing intervention strategies. AI has also facilitated research efforts in areas like substance abuse prevention and mental health education, and it has enhanced the clarity and accessibility of our reports and assessments. Furthermore, AI has played a crucial role in diversifying our data and ensuring that our surveys and interviews are comprehensive and inclusive. I always review the results of AI searches and AI suggestions for bias and credibility. Overall, the integration of AI has revolutionized my work, making it more efficient and effective in promoting equity.
Rad Resources
Conducting Equitable Evaluations by Katrina Bledsoe and Rucha Londhe
How we incorporate diversity and inclusion in evaluation by Eyerusalem Tessara
Follow Mushtaq Bilal on X (formerly Twitter) here for sound advice on using AI for writing, revising, and editing drafts of blogs and reports
AI links to aid equity in evaluation: ChatGPT for generating ideas, reviewing drafts, and creating outlines; Bing and Bard are also useful
Note-taking for meetings and saving websites and ideas
Otter and Fellow for transcribing meetings, meeting note-taking, and summarizing transcripts
The role of AI in Diversity, Equity and Inclusion
The American Evaluation Association is hosting Evaluation Network of Rhode Island (ENRI) Week. The contributions to AEA365 all this week come from ENRI Affiliate members.
I will start: “This Excel data file is equally distributed across the major racial, language, age, and ethnic groups in this geographic area” could be a statement from an AI summarizer. Before adopting the statement as fact, I would always check by analyzing the demographic data in my database to see whether the summary is truthful. Around equity issues, I would instruct the AI tool to provide the source of the data. This is now possible with the “custom instructions” feature of ChatGPT.
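As a concrete illustration of that check, here is a minimal sketch in Python (pandas and SciPy), assuming the worksheet has been exported to an Excel file with a single demographic column; the file and column names are hypothetical, not taken from my actual project. A chi-square goodness-of-fit test compares the observed group counts against a perfectly equal distribution.

```python
# Spot-checking an AI summarizer's claim that records are "equally
# distributed" across demographic groups. File and column names below
# are hypothetical; adapt them to your own database.
import pandas as pd
from scipy.stats import chisquare

df = pd.read_excel("community_data.xlsx")  # hypothetical file name

# Observed record counts per group (hypothetical column name)
counts = df["race_ethnicity"].value_counts()
print(counts)

# Goodness-of-fit test against a uniform (equal) distribution.
# A small p-value means the groups are NOT equally represented,
# so the AI summary should not be taken at face value.
stat, p_value = chisquare(counts)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Groups are not equally distributed; the AI claim fails the check.")
else:
    print("No evidence against equal distribution at the 5% level.")
```

The same pattern extends to the other columns the summarizer mentions (language, age bands, and so on); one quick test per column is a reasonable spot-check before trusting the summary.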
I would love to understand how, exactly, one checks AI outputs for bias, and how often this happens (and ought to happen). I see the “check for bias” step mentioned in articles about using AI tools, but haven’t seen any detail about how one goes about it.
I mention the “ought to happen” above because human checks on AI outputs strike me as similar to human checks on autonomous machinery: when the technology usually completes its task well, human operators become less attentive over time, due to the (often subconscious) assumption that the tech’s success pattern will continue. (Monitoring and QCing is boring and tiring!) But judging whether AI-written information succeeds requires careful discernment, so the need for QC is less obvious: while a “self-driving car” speeding toward a stop sign obviously requires intervention, AI writing outputs can look and sound confident even when horribly off base.
In other words, people who use AI tools likely need to actively combat the tendency, over time, to increasingly assume AI outputs are acceptable (i.e., accurate and unbiased) and don’t require QC.
Of course, the risk depends on the use case. For meeting summaries, it’s easy to spot-check an AI tool’s performance. When we start talking about equity, we ought to apply far closer scrutiny.
Thank you for your comment; good questions raised here. But first, can you give me an example of an AI-written statement or piece of advice that hides bias? Perhaps we can use such examples to work through your concerns.