
Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week: Exploring the Potential of NLP in Evaluation: Techniques, Use Cases, and How to Get Started by Juliette Berlin, Amy Shim, and Sarah Bergman


Part 1: Rationale for using NLP for Evaluation 

Hello, we are Juliette Berlin, Amy Shim, and Sarah Bergman, evaluation and data specialists with Deloitte Consulting LLP. With the recent buzz surrounding Generative AI (GenAI), there is increasing interest in how to best leverage emerging technologies to advance evaluation needs. Here, we discuss a fundamental driver of GenAI—Natural Language Processing (NLP). NLP is a method used to help computers understand human language to draw meaningful insights (e.g., key themes, sentiments). As an evaluator, being familiar with potential use cases of NLP can be beneficial for parsing large bodies of text for review and analysis, particularly for programs operating at multiple sites that may have a wealth of complex data. Additionally, NLP can act as a stepping stone to leveraging more complex solutions (e.g., Large Language Models [LLMs], GenAI). Below are five NLP techniques with utility for evaluators and some resources for how to get started with NLP using Python. 

Part 2: Five NLP Techniques and Use Cases 
  1. Text Summarization: A technique that creates short summaries of longer input text. Some models pull out the most important text from the input, while others create new text based on semantic understanding of the input.  
    Hot Tip: Use text summarization to summarize policies, program plans, or research articles.
  2. Sentiment Analysis: A text classification technique that analyzes input text and classifies the tone as negative, neutral, or positive. 
    Hot Tip: Use sentiment analysis on post-training surveys or customer feedback forms to quickly gauge overall sentiment of participants. 
  3. Keyword Extraction: A process to extract the most relevant words and phrases from text, either based on a list of keywords created by you or identified by a model. 
    Hot Tip: Use keyword extraction techniques to find frequently cited needs in funding requests, key line items in budgets, or commonly occurring activities in programmatic plans.
  4. Named Entity Recognition (NER): A technique that scans unstructured text and categorizes words or phrases into pre-defined groups such as names of organizations, locations, persons, etc. 
    Hot Tip: Use NER to compile lists of key stakeholders and partners, to categorize documents, or to find research articles of interest based on identified entities and their relevancy. 
  5. Topic Modeling: A type of model that uses machine learning to group large collections of documents according to discrete topics. 
    Hot Tip: Use topic modeling to pull key topics or themes from applications or open-response forms.
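To make the first technique concrete, here is a minimal extractive-summarization sketch in plain Python: it ranks sentences by the summed frequency of the words they contain and keeps the top-ranked ones. The text and function name are illustrative only; production summarizers normalize for sentence length and typically use trained models rather than raw counts.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Naive extractive summarizer: rank sentences by summed word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Score each sentence by the corpus frequency of its words
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
        reverse=True,
    )
    # Return the top sentences in their original order
    top = set(scored[:n_sentences])
    return " ".join(s for s in sentences if s in top)

text = ("The program served three sites. Each site reported strong attendance. "
        "Attendance at the third site grew fastest. Budget details are pending.")
print(summarize(text, n_sentences=1))
```

Note that this toy scorer favors longer sentences, one reason real systems divide by sentence length or use semantic embeddings instead.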
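Sentiment analysis can be sketched with a simple lexicon-based approach: count positive and negative words and compare. The word lists below are toy examples we made up for illustration; real evaluation work would use a validated lexicon (e.g., VADER) or a trained classifier.

```python
# Toy lexicons for illustration only; production work uses trained models
POSITIVE = {"helpful", "great", "clear", "engaging", "useful", "excellent"}
NEGATIVE = {"confusing", "boring", "unclear", "slow", "poor", "frustrating"}

def classify_sentiment(text):
    """Classify tone as positive, negative, or neutral by lexicon word counts."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("The training was clear and engaging."))   # positive
print(classify_sentiment("The session felt slow and confusing."))   # negative
```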
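Keyword extraction, in its simplest model-free form, is word-frequency counting after removing common "stopwords." The stopword list and sample text below are illustrative assumptions; libraries such as NLTK or spaCy ship fuller stopword lists and smarter scoring (e.g., TF-IDF).

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real libraries provide much longer ones
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for",
             "is", "are", "we", "our", "with", "on", "top"}

def extract_keywords(text, top_n=3):
    """Return the most frequent non-stopword terms in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

requests = ("We request funding for staff training and staff retention. "
            "Training materials and training facilities are our top needs.")
print(extract_keywords(requests))
```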
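Finally, a deliberately naive NER sketch: flag runs of consecutive capitalized words as candidate entities. This regex heuristic is only for intuition — it cannot assign categories (organization vs. person vs. location) and produces false positives at sentence starts; real NER uses trained models such as spaCy's pretrained pipelines.

```python
import re

def find_entity_candidates(text):
    """Naive NER sketch: runs of two or more capitalized words (not a real model)."""
    return re.findall(r"(?:[A-Z][a-z]+ )+[A-Z][a-z]+", text)

sentence = ("We partnered with the American Evaluation Association "
            "and Deloitte Consulting on the project.")
print(find_entity_candidates(sentence))
```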
Part 3: How to Get Started with NLP using Python

Now that you have an idea of how NLP can benefit your evaluation work, how can you get started?

NLP is typically performed in Python, so prior programming knowledge, along with familiarity manipulating data using a library such as pandas, is a plus. See the Rad Resources below to help you get started with NLP. 
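As a small taste of the pandas workflow mentioned above, the sketch below builds a table of hypothetical open-ended survey responses and derives a simple text feature from each one. The column names and sample responses are our own illustrative assumptions; in practice you would load real data, for example with `pd.read_csv`. This assumes pandas is installed (`pip install pandas`).

```python
import pandas as pd

# Hypothetical open-ended survey responses for illustration
df = pd.DataFrame({
    "respondent": [1, 2, 3],
    "feedback": [
        "The workshop was very useful and well organized.",
        "Too long, and the pacing felt slow.",
        "Good content overall.",
    ],
})

# A simple derived text feature: response length in words
df["word_count"] = df["feedback"].str.split().str.len()
print(df[["respondent", "word_count"]])
```

Once text lives in a DataFrame column like this, each NLP technique above becomes a function you can apply row by row with `df["feedback"].apply(...)`.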

Rad Resources

Have you tried any of these NLP techniques in your evaluation work? Can you think of ways you could incorporate NLP into your work? Share your thoughts in the comment box below.


The American Evaluation Association is hosting the Cluster, Multi-Site, and Multi-Level Evaluation (CMME) TIG Week. The CMME TIG encompasses methodologies and tools for designs that address single interventions implemented at multiple sites, multiple interventions implemented at different sites with shared goals, and the qualitative and statistical treatments of data for these designs, including meta-analyses, statistical treatment of nested data, and data reduction of qualitative data. All contributions this week to AEA365 come from our CMME TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
