Hi, I’m Carolyn Fonseca, PhD, Technical Director at Management Systems International (MSI), a Tetra Tech company. I have experience conducting evaluations for clients such as USAID, managing third-party monitoring platforms, and conducting special studies and research activities that test new technologies and methods.
I, like many of you, often work under time, budget, and data constraints when carrying out work for my clients. What if we could maintain quality in our analysis while also completing it much faster? Clients like USAID or the Millennium Challenge Corporation (MCC) can be flexible on either the time or the budget, but rarely both. Therefore, when collecting qualitative data through interviews and focus groups, we balance the need for speed with the need for quality.
Lessons Learned
My first test of AI in qualitative analysis was a large-scale household study for a USAID project in South Sudan. We were tasked with learning about households’ views on resilience, capturing data on various types of shocks as well as family strategies for managing these events. We were given four weeks to analyze more than 80 interviews and 45 focus groups. The client had the budget for a larger team, but they could not alter the deadline. Even with a large team, the timeline was challenging, so my team and I asked the client to allow us to test the use of AI in a portion of the analytical process.
We coded the data as we traditionally have, reviewing each line of text and coding it into themes using a coding scheme. Then, we would summarize and synthesize across the coded sections, capturing respondent groups’ views. Various experts suggest that summarizing coded text is an optimal place to use AI, so we took the coded lines and asked ChatGPT to summarize across specific categories we fed it.
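For evaluators who would rather script this step than paste excerpts into the chat interface, below is a minimal sketch of what that summarization pass could look like using the OpenAI Python library. We worked in ChatGPT directly, so the model name, prompt wording, and sample excerpts here are illustrative assumptions rather than our exact workflow.

```python
# Hypothetical sketch: summarizing human-coded excerpts by theme with the
# OpenAI API. The theme names, excerpts, prompt, and model are illustrative
# assumptions; they are not the study's actual data or exact prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Excerpts already coded by the analysis team, grouped by theme.
coded_excerpts = {
    "coping_strategies": [
        "Household sold livestock to cover food costs after the flood.",
        "Family members migrated seasonally to find paid work.",
    ],
    "shocks": [
        "Crops were destroyed by flooding two years in a row.",
    ],
}

def summarize_theme(theme: str, excerpts: list[str]) -> str:
    """Ask the model to summarize excerpts already coded under one theme."""
    prompt = (
        f"The following interview excerpts were coded under the theme "
        f"'{theme}'. Summarize the main patterns across respondents in "
        f"3-4 sentences, staying close to the text:\n\n"
        + "\n".join(f"- {e}" for e in excerpts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for theme, excerpts in coded_excerpts.items():
    print(theme, "->", summarize_theme(theme, excerpts))
```

Note that the human coding still happens first; the model only condenses text the team has already reviewed line by line, which is what kept us connected to the data.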
This approach allowed us to stay directly connected to the data and read the nuances in the text while speeding up the analytical process. We conducted quality checks, comparing summaries we created ourselves from the coded text to those produced by ChatGPT. My team and I were surprised at how closely the AI summaries mirrored our own; they even surfaced insights in the data that we had missed.
I leave this blog with one final thought: AI, like many tools before it, can be powerful and can augment our work. As with any tool, I must consider the parameters of my activity and determine whether it is appropriate, mitigate risk to my respondents, and work to capture the highest quality data. Where you end up using AI may reflect your own activity parameters, as it did in our case. In the end, it may strengthen your evaluation or be detrimental to it.
Rad Resource
We presented this study to ITE TIG members in September 2023. You can find the slides on the TIG webinar page: https://comm.eval.org/techtig/events/tig-webinars
The American Evaluation Association is hosting Integrating Technology into Evaluation TIG Week with our colleagues in the Integrating Technology into Evaluation Topical Interest Group. The contributions all this week to AEA365 come from ITE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.