
International and Cross-Cultural (ICCE) TIG Week: Co-creating the AI-mediated Era of Evaluation by Charles Guedenet and Jennifer Villalobos


Hello! We are Charles Guedenet, an MEL Advisor at IREX and PhD student, and Jennifer Villalobos, Professor of Evaluation Practice at Claremont Graduate University. We are both AI pragmatists.

Today’s mainstream AI narratives oscillate between excitement about the possibilities of AI augmenting humans, on one hand, and anxiety over job security and misuse on the other. One narrative argues that AI will make us more efficient, productive, and creative. The other warns that AI will ultimately make us lazy, and that its ethical, equity, and methodological challenges will harm those who have the least agency and are the most vulnerable. Evaluators find themselves balancing the exploration of AI’s potential with the need to uphold our standards and principles. Yet the “doom or boom” dichotomy oversimplifies the issue and positions people as passive bystanders in the AI revolution, buffeted by the rapid pace of change and reacting to it as it comes.

Like many applied disciplines, the contemporary evaluation landscape is at a defining moment, a transition even, in which the integration of AI tools and methodologies is reshaping evaluation practice. The prevailing calls to action have been either to keep up or get left behind, or to put on the brakes and regulate before it’s too late. We propose a third call: to engage in co-creating this emerging AI-mediated era. What if we flipped the script: it’s not (only) about humans mitigating the risk of AI harm, but AI mitigating the risk of human harm? How can evaluators both mitigate the risks of AI and reduce human-induced harms in evaluation, where bias and unethical practices are a real and present risk?

Evaluators have long espoused the need to be agile, innovative, and responsive to societal change. The rapid deployment of AI is another opportunity to remain at the forefront of technology that, if properly vetted and strategically used, can keep our evaluations relevant, credible, and effective in shaping decision-making. The AI ship has already sailed; we invite you to step into the flying car era of evaluation innovation. Just don’t fall asleep at the wheel!

Hot Tips

  • Check for personal biases and blind spots: Ask an AI large language model (LLM) such as ChatGPT or Claude to check your work for biases, offensive language, or blind spots. Use it to engage in reflection and reflexivity by asking it to question your assumptions and suggest alternative explanations.
  • Help make evaluation more inclusive: Use AI as an assistant, not a replacement, to make your technical reports accessible to a broader audience. Use prompts to remove jargon and idioms, to make your charts and graphs accessible to the vision-impaired, or to translate your text for non-native English speakers. Finally, offer respondents the option to use AI-powered audio- or video-to-text applications.
  • AI is more than just ChatGPT: New AI tools are released every day, many with low-cost subscriptions and built-in safeguards for data protection. Find the tools that work for your practice!
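For readers who script their workflows, the first tip can be sketched as a reusable prompt template. This is a minimal illustration, not a tool from the post: the function name, prompt wording, and example text are our own assumptions, and the resulting messages can be passed to whichever chat-style LLM API you use.

```python
# Sketch: a reusable "bias and blind spot check" prompt for a chat-style LLM.
# All names and wording here are illustrative assumptions; adapt the system
# prompt to your own evaluation context and standards.

def build_bias_check_messages(draft_text: str) -> list[dict]:
    """Build a chat message list asking an LLM to review a draft
    evaluation report for biases, offensive language, and blind spots."""
    system = (
        "You are a critical reviewer of evaluation reports. "
        "Identify potential biases, culturally insensitive or offensive "
        "language, and blind spots. Question the author's assumptions and "
        "suggest at least one alternative explanation for each key finding. "
        "Do not rewrite the report."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Please review this draft:\n\n{draft_text}"},
    ]

# Example usage with a deliberately overconfident draft sentence:
messages = build_bias_check_messages(
    "Attendance rose 20%, so the training clearly worked."
)
print(messages[0]["role"])  # system
print(len(messages))        # 2
```

Keeping the reviewer instructions in a single template like this makes the reflexivity check repeatable across reports, rather than re-improvised each time.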

Rad Resources


The American Evaluation Association is hosting International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. All contributions to AEA365 this week come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association or any contributors to this site.

2 thoughts on “International and Cross-Cultural (ICCE) TIG Week: Co-creating the AI-mediated Era of Evaluation by Charles Guedenet and Jennifer Villalobos”

  1. Thanks for these tips! Very helpful. Just a heads up that the links to the New Directions articles don’t work; they prompt me to log into my University of Michigan account (which I don’t have).

    1. AEA365 contributor, Curated by Elizabeth Grim

      Hi Becky – thank you so much for letting us know the links were not working. We have updated the links to both articles.
