
Tech TIG Week: Hot Tips: Using ChatGPT to Transcribe and Summarize In-person Meetings by Noah Goodman

Hi, I am Noah Goodman, an evaluator and researcher at Education Development Center. Recently, one of my projects held a planning meeting with school leaders to help them reflect on results from a youth health and risk behavior survey we administered in their districts. During that meeting, participants used post-its to reflect on the data, possible action steps, and areas where they would like technical assistance, and we wanted to summarize that conversation for future planning. In today’s post, I’m sharing how I used ChatGPT 4 to streamline that summary process.

Hot Tips

I’ll describe two ways I used ChatGPT during this summary process, to:

  1. Transcribe the handwritten post-it notes
  2. Take a first pass at categorizing participants’ responses

Transcribing the handwritten notes. I told ChatGPT I had pictures of post-it notes that I wanted to transcribe and then I uploaded the pictures. I checked all the results and found only a few minor errors, mainly in places where I also struggled to understand what was written. Two reflections:

  • Getting the formatting correct. For the ChatGPT output to paste into Google Docs with its formatting intact (i.e., bullet points and bolded text displaying properly), I had to highlight and copy the results directly in ChatGPT rather than click the copy button that appears at the bottom of the results.
  • Grouping responses. In some of the pictures I uploaded, different color post-its identified different questions. In others, the post-its were grouped spatially based on the question. Asking ChatGPT to group results by color or by column helped speed the clean-up process.
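For teams with many photos, the same transcription step could be scripted against the OpenAI API instead of the chat interface. The sketch below is a hypothetical example, not the author's actual workflow: the model name, prompt wording, and file names are all assumptions to adapt to your own setup.

```python
# Hypothetical sketch: sending a photo of post-it notes to the OpenAI API
# for transcription. Model name ("gpt-4o") and prompt text are assumptions.
import base64

def build_transcription_request(image_bytes,
                                prompt="Transcribe each post-it note in this photo as a separate bullet point."):
    """Build a chat-completions payload embedding one photo as a base64 data URL."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
            ],
        }],
    }

# Sending the request requires an API key, e.g.:
#   from openai import OpenAI
#   client = OpenAI()
#   payload = build_transcription_request(open("postits.jpg", "rb").read())
#   resp = client.chat.completions.create(**payload)
#   print(resp.choices[0].message.content)
```

As in the chat workflow above, the prompt can also ask the model to group notes by post-it color or by spatial column before returning results.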

Categorizing the post-its. In order to organize the post-its into thematic categories, I pasted the transcribed post-its into the prompt, listed the meeting’s discussion questions, and asked ChatGPT to summarize participants’ responses.

  • Creating starter categories. I wanted to use ChatGPT to create broad thematic categories that I could then refine, so I asked ChatGPT to include all of the transcribed notes in the categories it created. Without this extra stipulation, it returned categories with only a few example notes underneath each, which made it hard for me to refine the categories and to ensure categories aligned with the data.
  • Ensuring all the data was included. Although I asked ChatGPT to include all of the transcribed notes in the categories it created, when I double-checked the results I found it had omitted 66 of the 151 notes. Verifying the output against the source data is essential.
  • Asking for alternative categorizations. After getting an initial set of categories, I asked ChatGPT if it could suggest alternative ways to categorize the notes, and it returned 8: by topic area, action required, stakeholder group, type of data/trend, question focus, programmatic area, data use, and demographic focus. Finally, I asked which approach to grouping the notes it thought would be best, and it suggested a hybrid approach of findings by topic and related needs and actions. This ability to suggest alternative ways to group data seems particularly valuable for evaluators looking to use ChatGPT as a thought partner.
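The double-check described above can be partly automated: since the transcribed notes are already in hand, a few lines of code can flag any note that never appears in the categorized output. This is a minimal sketch with placeholder note texts, and it assumes notes are carried into the summary verbatim; paraphrased notes would still need a manual check.

```python
# Minimal sketch of the completeness check: which transcribed notes are
# missing from ChatGPT's categorized summary? Note texts are placeholders.
def find_missing_notes(transcribed, categorized_text):
    """Return transcribed notes that do not appear (case-insensitively) in the summary."""
    normalized = categorized_text.lower()
    return [note for note in transcribed if note.lower() not in normalized]

notes = [
    "More staff training on data use",
    "Share results with parents",
    "Add peer support groups",
]
summary = (
    "Capacity needs:\n- More staff training on data use\n"
    "Communication:\n- Share results with parents"
)
missing = find_missing_notes(notes, summary)
print(missing)  # → ['Add peer support groups']
```

A check like this would have surfaced the 66 omitted notes immediately, turning a tedious manual comparison into a quick script run.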

ChatGPT saved me time in transcribing and the initial categorizing, and its ability to suggest different ways of grouping the notes was insightful. However, the fact that it did not include all the transcribed notes when it categorized them highlights some challenges of using generative AI tools for data analysis. Evaluators need to develop their understanding of how these tools work and what uses are likely to be appropriate for our evaluations.


The American Evaluation Association is hosting Integrating Technology into Evaluation TIG Week with our colleagues in the Integrating Technology into Evaluation Topical Interest Group. The contributions all this week to AEA365 come from ITE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
