Hi! We’re the team behind a sentiment analysis study of EvaluATE’s open-ended survey responses. Larry is a professor of engineering management at WMU, and Nolen is a former Fortune 500 data scientist consulting on data analytics projects.
EvaluATE hosts webinars and workshops about evaluation topics, and attendees submit feedback in the form of open- and close-ended survey responses. How can that data be used to its fullest?
It’s a question many organizations face. The usual descriptive statistics and narrative summaries of open-ended survey responses have their place. But do you really want to sift through thousands of comments to find that nugget? We don’t think so. And, if you’re working with a large number of survey responses, you’re probably missing opportunities to identify trends and glean information for ongoing improvement. That’s where this research-on-evaluation study comes in.
Data analytics uses rules to analyze open text: software applies the rules to each comment and scores it automatically, so we can understand a whole data set without reading, analyzing, categorizing, and scoring every comment by hand.
Now, where do those rules come from? Is there some “black box” of rules we can find on the internet? Well, there is, but we may not want to trust it. Consider this: words like “metal” and “plastic” are probably good signs in a review of a new car, but what if you found them in a review of a new snack food? Yuck! Scary! Dangerous! Off-the-shelf rules built for one domain can badly misread another.
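To make that concrete, here is a minimal sketch of rule-based (lexicon) scoring in Python. The lexicons, words, and weights below are invented for illustration and are not the rules from our study; they just show how the same words can score very differently depending on the domain.

```python
# Minimal sketch of rule-based (lexicon) sentiment scoring.
# The lexicons and weights are hypothetical, not our study's rules.

car_lexicon = {"metal": 1.0, "plastic": 0.5, "rattle": -1.0}
snack_lexicon = {"metal": -2.0, "plastic": -2.0, "crunchy": 1.0}

def score(comment: str, lexicon: dict[str, float]) -> float:
    """Sum the weights of any lexicon words found in the comment."""
    words = comment.lower().split()
    return sum(lexicon.get(w, 0.0) for w in words)

review = "lots of metal and plastic"
print(score(review, car_lexicon))    # 1.5  (positive in the car domain)
print(score(review, snack_lexicon))  # -4.0 (alarming in the snack domain)
```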
So we used a subset of the survey response data to build and validate a set of custom rules. By analyzing the whole data set using those rules, we were able to answer questions about the trainings and about the survey itself: What do participants learn in the events? How do they perceive the events’ value and impact? What types of survey questions are best asked in an open-ended (vs. close-ended) format? Which survey questions are most effective?
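In broad strokes, the build-and-validate step can look like the sketch below: score a hand-coded subset with draft rules and check agreement with the human coders before trusting the rules on the full data set. The comments, labels, lexicon, and decision rule here are hypothetical, not our actual data or procedure.

```python
# Hypothetical illustration of validating draft rules against a hand-coded subset.
hand_coded = [
    ("the webinar was useful and practical", "positive"),
    ("audio kept cutting out, hard to follow", "negative"),
]

draft_lexicon = {"useful": 1.0, "practical": 1.0, "hard": -1.0, "cutting": -0.5}

def predict(comment):
    # Same lexicon-scoring idea as above, reduced to a positive/negative call.
    s = sum(draft_lexicon.get(w, 0.0) for w in comment.lower().split())
    return "positive" if s >= 0 else "negative"

agreement = sum(predict(c) == label for c, label in hand_coded) / len(hand_coded)
print(f"Agreement with human coders: {agreement:.0%}")
# Only once agreement is acceptable would the rules be applied to the whole data set.
```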
Want to know more? We’re working on an article for publication. For now, here are a few ways to see what we did and find out more about data analytics.
Rad Resources
- Video series. We created an image file showing the steps in our process, with a clickable link to a video summary of each step.
- Viz. We developed this interactive data visualization tool as an example of a way to visualize unstructured text.
To learn more or to engage the WMU Evaluation Center on your unstructured data projects, contact Megan Zelinsky with EvaluATE.
The American Evaluation Association is hosting Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to AEA365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.