
Balancing Optimism and Concerns for the Use of Generative AI for Evaluation by Jennifer Borland

Greetings, AEA365 readers! Liz DiLuzio here, Lead Curator of the blog. Registration for this year’s conference is officially open, and our local hosts at the Indiana Evaluation Association (IEA) are working with the AEA team to ensure our time in lovely Indianapolis is a fulfilling one. This week’s posts feature the voices of IEA’s members. Happy reading!


Hello, my name is Jennifer Borland. I have been actively working in the field of evaluation for more than 25 years. Aside from the internet itself, I'm hard-pressed to think of another technology that will have as great an impact on the field of evaluation as generative AI is likely to have.

Earlier this year I tried ChatGPT for the first time. After having just spent months going back and forth with partners and content experts to come up with a set of survey questions about environmental literacy, I was curious to see how ChatGPT could have saved time in that process. In a few minutes, and with only a few prompts, it created a surprisingly similar set of questions. That experience left me both in awe and a little fearful about how generative AI could, and likely would, be changing the evaluation field. If you don't yet think generative AI systems will have an impact on your work, it may be time to think again.

Reasons I’m most hopeful for generative AI:

  • Generative AI can help to make a blank page seem a whole lot less daunting. As a thought partner, AI can provide good places to start when it comes to writing a proposal, instrument, or report.  
  • By helping save time on some tasks (e.g., creating code or syntax for running repetitive analyses), generative AI can give us more time to spend on work that requires greater thought.
  • Generative AI has the potential to help evaluators hone their skills. As an instructive tool or work aid, I see the potential of generative AI to provide new inroads into the field of evaluation for would-be evaluators from a wide range of backgrounds—potentially helping to bring greater diversity to our field.
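To make the time-savings point concrete, here is a minimal sketch of the kind of repetitive-analysis code one might ask a generative AI tool to draft: looping over survey items to produce the same descriptive statistics for each. The item names and the 1–5 response scale are hypothetical examples, not from any actual study.

```python
import statistics

# Hypothetical survey responses on a 1-5 agreement scale
responses = {
    "env_knowledge": [4, 5, 3, 4, 2, 5, 4],
    "env_behavior":  [3, 3, 4, 2, 3, 4, 5],
    "env_attitude":  [5, 4, 4, 5, 3, 4, 4],
}

def summarize(items):
    """Return the same descriptive statistics for every survey item."""
    return {
        name: {
            "n": len(scores),
            "mean": round(statistics.mean(scores), 2),
            "sd": round(statistics.stdev(scores), 2),
        }
        for name, scores in items.items()
    }

for item, stats in summarize(responses).items():
    print(f"{item}: n={stats['n']}, mean={stats['mean']}, sd={stats['sd']}")
```

Writing this by hand for one item is trivial; the savings come when a tool drafts (and an evaluator verifies) the same pattern across dozens of items or datasets.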

On the other hand, the things I’m most concerned and/or curious about include the following:

  • Generative AI is a tool that few of us have been formally trained to use, so there's a need to develop skill in using it effectively—including crafting effective prompts and verifying that generated content is true and accurate.
  • Ensuring participant privacy and protection of sensitive data: Generative AI tools may retain and learn from the information users enter, so using tools like ChatGPT to aid in the synthesis of large data sets could violate IRB requirements for the protection of human subjects' data.
  • We need guidelines for making appropriate attributions to contributions from generative AI. There’s some helpful citation info here: How to cite ChatGPT
  • I’m interested in thinking more about how best to engage in conversations with partners about the use of generative AI tools as part of evaluative processes. It’s also necessary to develop clear and effective talking points that summarize the value that human insights can bring to evaluative work.
  • I’m concerned about the possibility of accidentally overlooking or possibly contributing to biases that may be inherent in content produced by AI, given the limited, and often homogeneous, sources of information those systems have access to.

I am excited to keep learning more about and experimenting with generative AI tools, but I also welcome opportunities to talk more with other evaluators who may have some of the same questions or concerns as those I’ve listed above. I would also like to extend my thanks to Silva Feretti for all the great points that she’s contributed to this conversation thus far. Silva was a featured presenter at a webinar hosted by the Indiana Evaluation Association earlier this year and is the author of these Rad Resources on the AEA365 Blog:


We’re looking forward to the Evaluation 2023 conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to contribute to AEA365? Review the contribution guidelines and send your draft post to AEA365@eval.org. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
