"Hello, World!" This is the opening statement programmers learn in introductory computer science classes. Soon enough, it may be one that all evaluators know by heart.
I am Jason Block, an AI (artificial intelligence) researcher and experimenter who helps companies define, measure, and tell the story of their impact. As a creative New Yorker, I’m also working on the first-ever music AI project to advance LGBTQIA+ rights and equality.
We have all heard the buzz about AI, natural language processing (NLP), and other digital products positioned to change the world. You may have seen recent images from DALL·E, remember that Tableau acquired Narrative Science, a platform for automated data storytelling, or have heard how AI won a local art fair. But what are the implications for the field of evaluation? What are the risks and opportunities when these new actors emerge?
In this blog post, I explore the potential implications of these new actors for evaluation. Algorithmic technologies cast shadows across our lives. In their complicated brokenness, I hope we evaluators uplift differing perspectives and beauty.
There are a lot of unknowns when it comes to these new technologies, which can be both exciting and daunting. AI has the potential to change how we work and to optimize how we develop insights from enormous datasets. For example, MonkeyLearn can classify language in minutes, Copy.ai can generate content with a few clicks, and Bedrock AI examines hundreds of publicly available records to assess financial and ESG risks. AI tools are simultaneously complementing and replacing human tasks.
Evaluators are called upon to be the voice of reason in the face of these changes. It is why we use instruments like theories of change or Social Return on Investment to discuss the impact of initiatives. We help stakeholders make sense of the world. We guide resource allocation, presenting multiple perspectives. Our responsibility is to be critical. We have to question assumptions, power, and the status quo.
Given that, it’s important to examine a few risks of AI for evaluation, highlighting the adverse impacts on LGBTQIA+ communities in particular. I believe the same framework can be applied to other marginalized groups and evaluation efforts. Three significant risks are binary thinking, bias, and blurring.
- Binary Thinking: Models are built on binary thinking, which can be harmful to expression. For example, many survey software tools recognize only two genders (male and female) or require people to select a single sexual orientation. These tools fail to account for the complexities of humanity in their creation. Evaluators can press forward by building resources like More Than Numbers, a guide to DEI data collection, engaging in participatory evaluation practices, or critically examining impact frameworks like the Sustainable Development Goals, which don’t mention LGBTQIA+ folks.
- Bias: AI technology can be biased against LGBTQIA+ people. For example, if a model is trained only on data that reflects heterosexual, cisgender people, it will likely encode historical bias against LGBTQIA+ people.
- Blurring: AI technologies consistently censor LGBTQIA+-related content under the guise of ethics policies. For example, social media platforms leverage AI to automatically flag and remove posts containing keywords or phrases associated with LGBTQIA+ communities (e.g., banning “intersex” as non-age-appropriate content). This erases people’s identities from the mainstream, blurring the truth of their lived experiences.
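The blurring risk can be made concrete in a few lines of code. Below is a minimal sketch of context-blind keyword moderation; the blocked-keyword list and example posts are hypothetical illustrations, not any real platform’s policy:

```python
# A minimal sketch of naive keyword-based content moderation, illustrating
# how identity terms alone can trigger removal. The keyword list and posts
# are hypothetical, not any platform's actual policy.

BLOCKED_KEYWORDS = {"intersex", "transgender"}  # hypothetical policy list

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any blocked keyword, regardless of context."""
    text = post.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

posts = [
    "Our clinic now offers intersex-inclusive intake forms.",  # health resource
    "Celebrating Transgender Day of Visibility!",              # celebration
    "Quarterly evaluation report is now available.",           # neutral
]

flags = [naive_flag(p) for p in posts]  # [True, True, False]
```

Both identity-affirming posts are flagged while the neutral post passes, which is exactly the kind of erasure evaluators should probe when auditing automated moderation systems.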
We all have the responsibility to design a better world. When new technologies, new stakeholders, and new identities emerge, we owe it to each other to proudly say, “Hello, World!” We welcome your nuances with critical minds and open hearts.
Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.