
AI In Data Analysis: A Framework for Responsibly Incorporating AI into the Analytics Process by Elizabeth DiLuzio

Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources, and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future Individuals Weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.


Author Liz DiLuzio

Happy Sunday! I’m Liz DiLuzio, and today I’m writing from my roles as adjunct professor at New York University and freelance data analysis instructor. Over the past year, my colleagues and I have begun incorporating ChatGPT into the data analysis workshops we teach, giving participants a structured environment in which to explore AI’s application in data analytics. Watching participants interact with ChatGPT has highlighted for me the need for clear guidelines and structured approaches when integrating AI into our evaluation processes. This blog post presents a structured framework for effectively harnessing AI, inspired by the principles of data-driven decision-making.

Please note: This blog assumes you have already considered whether AI *can* be responsibly incorporated as a tool in the data analytics process and, as a next step, explores *how*.

Making the Case for this Framework

One of the key benefits of applying a structured framework to AI utilization is that it keeps evaluators in the driver’s seat. Because both AI and data-driven approaches rely on structured methodologies to maximize effectiveness and minimize risk, the well-established framework for data-driven decision-making can guide evaluators and analysts in maintaining control over these powerful tools. By systematically identifying goals, collecting and analyzing data, critically reviewing results, and taking informed action, we can harness AI’s potential without relinquishing control. Below are the six steps this framework recommends for any evaluator using AI in their analysis.

Step 1: Identify the Goal, Methods, and Most Relevant Tools

When engaging in a new evaluation project, the first step is to clearly define the research question or evaluation objective. Similarly, when integrating AI, it’s crucial to pinpoint the specific goal or problem you aim to address. This could range from improving participant outcomes to optimizing resource allocation or enhancing program efficiency. For example, an evaluator wanting to understand key themes and sentiments in participant feedback for a community program might choose to use ChatGPT to analyze and summarize feedback data, aiming to identify areas for improvement and success.
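To make this concrete, here is a minimal sketch in Python of what pinning down the goal can look like before any data is shared with an AI tool. The program, goal statement, and research questions are hypothetical placeholders for your own:

# Step 1 sketch: write the goal and research questions down as a fixed,
# reusable preamble so every later prompt to the AI starts from the same
# explicit objective. All names and wording here are hypothetical.

ANALYSIS_GOAL = (
    "Identify the key themes and overall sentiment in participant "
    "feedback for a community nutrition program, and flag areas of "
    "success and areas for improvement."
)

RESEARCH_QUESTIONS = [
    "What do participants say is working well?",
    "What barriers or frustrations do participants report?",
    "How does sentiment differ across these themes?",
]

def build_prompt_preamble() -> str:
    """Turn the goal and questions into the fixed opening of every prompt."""
    questions = "\n".join(f"- {q}" for q in RESEARCH_QUESTIONS)
    return f"Evaluation goal: {ANALYSIS_GOAL}\n\nResearch questions:\n{questions}"

print(build_prompt_preamble())

Writing the goal down this explicitly, whether in code or plain prose, is what keeps the AI anchored to your evaluation question rather than its own tangents.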

Rad Resource

This blog from MERL Tech provides tool recommendations from your fellow evaluators.

Step 2: Collect the Data

Once the goal is set, it is time to collect the data. Whether you are following a traditional data-driven approach or this AI framework, that means gathering the necessary input data, such as participant surveys, feedback forms, or program records.
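As a rough sketch of what this can look like in practice, the snippet below loads open-ended feedback from a CSV export and strips identifying columns before anything reaches an AI tool. The file name and column names are assumptions, not a prescription:

import pandas as pd  # assumes feedback was exported to a CSV file

# Load the hypothetical export; "comment" is the free-text column.
feedback = pd.read_csv("participant_feedback.csv")

# Basic hygiene before any data reaches an AI platform: drop empty
# responses and remove identifying fields you are not cleared to share.
feedback = feedback.dropna(subset=["comment"])
feedback = feedback.drop(columns=["name", "email"], errors="ignore")

comments = feedback["comment"].str.strip().tolist()
print(f"{len(comments)} comments ready for analysis")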

Step 3: Conduct the Analysis

This step appears in both the data-driven framework and this AI utilization framework. In both, the analyst maps out an analysis plan that details the analyses and data points that will be used to answer the overarching research questions. Treat your AI platform of choice as you might a junior analyst: share your analysis plan with clear instructions and, once the analysis is complete, be prepared to check the work.
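Here is one way that handoff might look in code, as a minimal sketch using the OpenAI Python SDK (version 1 or later). The model name and plan wording are illustrative, and `comments` is the cleaned list from Step 2:

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The analysis plan doubles as the system prompt: explicit instructions,
# a fixed output structure, and a guardrail against fabricated quotes.
ANALYSIS_PLAN = """You are assisting with a program evaluation. Follow this plan exactly:
1. Group the comments below into 3 to 6 themes.
2. For each theme, give a short label, a one-sentence description, an
   overall sentiment (positive, negative, or mixed), and two verbatim quotes.
3. List any comments you were unsure how to categorize.
Do not invent comments that are not in the input."""

numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use your organization's approved model
    messages=[
        {"role": "system", "content": ANALYSIS_PLAN},
        {"role": "user", "content": numbered},
    ],
)
print(response.choices[0].message.content)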

Step 4: Critically Review the Results

This step is exclusive to the AI utilization framework. While it could be folded into Step 3, I believe it is critical enough to stand as its own distinct step. Here, the analyst scrutinizes the resulting analyses to ensure alignment with the evaluation objectives and to identify any biases or errors. This may involve spot checks that compare the AI’s analysis with manual analyses to surface discrepancies, refining prompts or methods as needed. This step is also a good time for the evaluator to begin extracting findings, surprises, and curiosities that arise from a review of the results.
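One simple spot check is to hand-code a small sample yourself and measure how often the AI’s labels agree with yours. The sketch below uses made-up sentiment labels purely to show the mechanics; in practice you would substitute your own coded sample:

# Step 4 sketch: percent agreement between AI and manual sentiment codes.
# The IDs and labels below are placeholders for a real coded sample.
ai_labels = {"c01": "positive", "c02": "negative", "c03": "positive",
             "c04": "mixed", "c05": "negative"}
manual_labels = {"c01": "positive", "c02": "negative", "c03": "mixed",
                 "c04": "mixed", "c05": "negative"}

shared = ai_labels.keys() & manual_labels.keys()
matches = sum(ai_labels[k] == manual_labels[k] for k in shared)

print(f"Agreement on {len(shared)} spot-checked comments: {matches / len(shared):.0%}")
for k in sorted(shared):
    if ai_labels[k] != manual_labels[k]:
        print(f"  Discrepancy on {k}: AI={ai_labels[k]!r}, manual={manual_labels[k]!r}")

Low agreement is a signal to refine the prompt, tighten the coding categories, or expand the manual sample before trusting the full analysis.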

Step 5: Collaborate on Meaning-Making

In a data-driven approach, interpretation of the data is often done collaboratively, ensuring that diverse perspectives are considered. The same principle should apply to analyses that utilize AI. Sharing the results and their critical review with interested parties encourages collaborative decision-making and helps in refining strategies based on collective insights. This can look like the evaluator facilitating a meaning-making session with a group of program staff, participants, and other interested parties. Through a collaborative review process, all parties can refine and learn from the findings, and contribute to planning next steps.

Step 6: Take Action

The final step involves taking informed action based on the insights gained. This might mean implementing new strategies or adjusting existing ones. It may also surface the need for additional analyses, or for refinements to the present analysis.

Integrating AI into program evaluation doesn’t have to be daunting. By applying the tried-and-true principles of data-driven decision-making, evaluators can effectively and confidently incorporate AI into their processes. This structured approach not only maximizes the benefits of AI but also ensures that human oversight and strategic thinking remain at the forefront of technological advancement. As we continue to navigate the complexities of modern evaluation, leveraging AI with a data-driven mindset will undoubtedly pave the way for smarter, more efficient, and more successful outcomes.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
