
Theories of Evaluation TIG Week: Corpus Linguistics: What’s All This Talk About Evaluation? by Aaron Kates

Hi, my name is Aaron Kates. I am a recent graduate of the Interdisciplinary PhD in Evaluation Program at Western Michigan University. I live in South Bend, Indiana, and work as a consultant with EffectX, an independent evaluation firm. Today I will share a few resources on a topic I learned about recently: corpus linguistics. It is a fascinating tool for learning how words (linguistics) are used within a single body of text (a corpus) and across bodies of knowledge (corpora). The corpus linguistics I discuss today is focused specifically on evaluation theory, but the process can be applied to many other topics in evaluation scholarship and practice.

Hot Tip:

Corpus linguistics is the study of the linguistic features of large bodies of text. The approach emerged in the early 1990s thanks to gains in computing power, the ability to build and access large databases, and programmers' growing capacity to generate word counts and word associations across massive bodies of literature. In practice, this means the tool can summarize enormous amounts of information to help us understand the main topics and themes of a corpus, as well as the topics that make it distinctive relative to similar fields. For example, one could use this tool to discern how the body of knowledge on evaluation practice in one field (e.g., education) compares with another (e.g., public health), and to illustrate how the discussions are similar and different across the two.

To understand how the field of evaluation uses words, I built a corpus from fourteen evaluation-focused academic journals by downloading all articles published in 2019. I uploaded this collection of over 400 articles to a corpus linguistics platform called Sketch Engine to generate lists of the top keywords used in the evaluation literature, which gives a sense of its dominant topics. For example, five of the top keywords were “evaluator,” “program,” “stakeholder,” “data,” and “learning.”
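To make the keyword idea concrete, here is a rough, illustrative sketch (not Sketch Engine's actual method) of how one could rank words that are unusually frequent in a folder of evaluation articles compared with a reference collection. The folder names and the simple smoothed frequency-ratio scoring are assumptions for demonstration only.

```python
# Minimal sketch: count word frequencies in a focal corpus (e.g., a folder of
# 2019 evaluation articles saved as .txt files) and rank words that are much
# more frequent there than in a reference corpus. Folder names and scoring
# are illustrative assumptions, not Sketch Engine's actual algorithm.
import re
from collections import Counter
from pathlib import Path

def word_counts(folder: str) -> Counter:
    """Lowercased word frequencies across all .txt files in a folder."""
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        counts.update(re.findall(r"[a-z]+", text))
    return counts

def keywords(focal: Counter, reference: Counter, top_n: int = 20):
    """Rank words by how much more frequent they are in the focal corpus."""
    focal_total = sum(focal.values()) or 1
    ref_total = sum(reference.values()) or 1
    scores = {}
    for word, count in focal.items():
        focal_rate = count / focal_total
        ref_rate = (reference.get(word, 0) + 1) / ref_total  # +1 smooths unseen words
        scores[word] = focal_rate / ref_rate
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

if __name__ == "__main__":
    evaluation_corpus = word_counts("evaluation_articles_2019")   # hypothetical folder
    reference_corpus = word_counts("general_academic_reference")  # hypothetical folder
    print(keywords(evaluation_corpus, reference_corpus))
```

The same comparison logic is what lets a corpus approach show how one field's discussion (e.g., education) differs from another's (e.g., public health): swap in a different reference collection and the keyword list shifts accordingly.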

Hot Tip:

The corpus linguistics tool suggested that, compared with other fields, the language we use to describe evaluation is very diverse. I took this to support the idea that we are a transdiscipline, but it could also reflect that some people understand evaluation as a field, others understand it as a tool, and still others understand it as a set of values for practice. I wonder whether this is illustrative of the challenges we sometimes encounter when talking about evaluation with people who do not understand it the same way we do.

Cool Trick:

How could this type of inquiry be useful for evaluators? I imagine scenarios in which an evaluator has access to a broad array of focus group data, emails, or programmatic documents, and the evaluation questions could be addressed by systematically analyzing major themes and ideas across these data sources. Such analysis could be used to identify common themes of discussion, flag potential lines of inquiry, or even anticipate points of disagreement or possible misunderstanding. This means it could be a valuable tool on the front end when scoping an evaluation, or during the analysis phase when looking for common themes.
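As a concrete illustration of that front-end use, the sketch below flags terms that recur across a set of documents, the kind of quick scan that could surface candidate themes to explore further in interviews or coding. The folder name, stopword list, and document-share threshold are illustrative assumptions rather than a prescribed workflow.

```python
# Rough sketch: flag terms that recur across a folder of program documents or
# focus group transcripts. Folder name, stopwords, and the "appears in at
# least half the documents" threshold are illustrative assumptions only.
import re
from collections import Counter
from pathlib import Path

STOPWORDS = {"the", "and", "of", "to", "a", "in", "that", "is", "for", "we", "it", "on"}

def recurring_terms(folder: str, min_doc_share: float = 0.5, top_n: int = 25):
    doc_counts = Counter()   # how many documents mention each term
    term_counts = Counter()  # total mentions across all documents
    paths = list(Path(folder).glob("*.txt"))
    for path in paths:
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        words = [w for w in re.findall(r"[a-z']+", text)
                 if w not in STOPWORDS and len(w) > 2]
        term_counts.update(words)
        doc_counts.update(set(words))
    # Keep terms that appear in at least min_doc_share of the documents,
    # then rank them by overall frequency as candidate themes to review.
    threshold = max(1, int(min_doc_share * len(paths)))
    candidates = {t: c for t, c in term_counts.items() if doc_counts[t] >= threshold}
    return Counter(candidates).most_common(top_n)

if __name__ == "__main__":
    for term, count in recurring_terms("focus_group_transcripts"):  # hypothetical folder
        print(f"{term}: {count}")
```

A list like this is a starting point for human judgment, not a replacement for it; the value is in quickly narrowing a large pile of text down to patterns worth reading closely.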

Rad Resources: 

  • Sketch Engine is a low-cost, easy-to-use, web-based tool for corpus linguistics. Generate a corpus from your documents, and you can have results in seconds!
  • WordSmith 8 is a desktop platform that allows more control and customization than Sketch Engine, but it is a bit more difficult to master. It is well supported and updated frequently by its developer, linguist Mike Scott.
  • Shameless plug for my dissertation: A Discipline in Search of a Voice: A Corpus Linguistic Study of Evaluation Scholarly Literature

The American Evaluation Association is celebrating Theories of Evaluation (ToE) TIG week. All of this week's contributions to AEA365 come from AEA's ToE TIG. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
