We are Linda Cabral and Judy Savageau from the University of Massachusetts Medical School’s Center for Health Policy and Research. All of our projects involve some type of data collection, sometimes in the form of a survey. To get high-quality survey data, you need to ensure that your respondents are interpreting questions the way you intended; the familiarity and meaning of words may not be the same among all members of your sample. To increase the likelihood of high-quality data, most of our evaluation protocols involving surveys include cognitive interviewing (aka ‘think-aloud interviewing’ or ‘verbal probing’) as part of the survey design and pretesting process.
Cognitive interviewing, a qualitative approach to collecting quantitative data, enables evaluators to explore the processes by which respondents answer questions and the factors that influence their answers. For surveys, it involves fielding an instrument with a small group of individuals from your target sample population and asking the following types of questions for each item (one way to organize these probes into an interview guide is sketched after the list):
- Are you able to answer this question? If not, why not?
- Is this question clear? If not, what suggestions do you have for making it clearer?
- How do you interpret this question? Or, how do you interpret specific words or phrases within a question?
- Do the response options make sense? If not, what suggestions do you have?
- How comfortable are you answering this question?
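If you keep your draft instrument in electronic form, one low-tech way to stay organized is to pair every draft item with this standard set of probes to produce a printable interview guide. The short Python sketch below is an illustration of that idea, not part of any particular protocol, and the draft item wordings in it are hypothetical:

```python
# Minimal sketch: generate a cognitive-interview guide by pairing each
# draft survey item with the standard probe types listed above.
# The draft items below are hypothetical examples.

PROBES = [
    "Are you able to answer this question? If not, why not?",
    "Is this question clear? If not, how could it be made clearer?",
    "How do you interpret this question, or specific words or phrases in it?",
    "Do the response options make sense? If not, what would you suggest?",
    "How comfortable are you answering this question?",
]

DRAFT_ITEMS = [  # hypothetical items to be pretested
    "In the past 12 months, how many times did you visit a primary care provider?",
    "Overall, how would you rate the coordination of your health care?",
]

def build_guide(items, probes):
    """Return interview-guide text: each item followed by every probe."""
    lines = []
    for number, item in enumerate(items, start=1):
        lines.append(f"Item {number}: {item}")
        lines.extend(f"    Probe: {probe}" for probe in probes)
        lines.append("")  # blank line between items
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_guide(DRAFT_ITEMS, PROBES))
```

A guide generated this way also gives interviewers a consistent place to record notes item by item, which makes it easier to compare how different respondents interpreted the same question.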
Cognitive interviewing can reduce respondent burden by removing ambiguity and adding clarity, so that when the survey is launched, respondents will have an easier time completing it and will give you the information your evaluation needs.
Lessons Learned
- This technique will likely be new for respondents; their inclination will be to answer the survey question rather than talk about how they think about the question. Some up-front coaching will probably be needed, especially if you’re developing a survey for non-English-speaking respondents.
- Cognitive interviewing can be a time-consuming (and, thus, costly) activity. Consider whether certain survey questions will benefit more than others; e.g., undertaking this testing for simple demographic questions is likely unnecessary.
Hot Tips
- A comprehensive pretesting process includes both cognitive interviewing and pilot testing of the instrument. Whereas the primary goal of cognitive testing is to identify how questions are interpreted and revise them as needed, pilot testing extends this process by examining the length, flow, salience, and ease of administration of the survey. Pilot testing may also detect more concrete problems with the survey as a whole that affect responses to specific questions and/or the overall response rate.
Rad Resources: There are numerous resources on cognitive interviewing for survey development, including this article that compiles several of them as well as this more comprehensive guide.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Linda and Judy,
While exploring this blog I’ve become interested in the concept of “think-alouds” as an evaluation technique, and in digging into this idea a bit deeper I happened upon your post. I’m a master’s student at Queens University, and my interest in this topic was originally spurred by a course I’m currently taking on program evaluation design. However, much of my interest in think-alouds comes not from my educational setting but from my professional one: I’m a high school science teacher, and as online learning becomes more and more commonplace amid the pandemic, I see potential in using some variation on cognitive interviews to evaluate my students.
I am much more comfortable and confident with quantitative data than I am with qualitative data, and your description of cognitive interviewing as “a qualitative approach to collecting quantitative data” suggests that this could be a good technique to help me bridge the divide between the two data types in my own practice.
While your applications of cognitive interviewing appear to be in-person, I’m wondering whether I might be able to use a similar technique in a remote and asynchronous manner; Stuart Henderson (2010) describes something similar in another post on this blog. For example, I could assign a physics problem and have my students audio-record themselves as they work their way through it. Listening to these recordings would give me a window into their thought patterns that I haven’t had since our classes moved from in-school to online. I think the five sample interview questions you provided would work just as well as prompts for one of my high school students solving a physics problem as they do for the respondents in your surveys.
You point out that one of the downsides of cognitive interviewing is the time required, but I’m hoping that having the students interview themselves and provide me with a recording will speed up the whole process. In person would be better, but I don’t have that option for now.
Thank you for your insights on this evaluation technique and for helping me see how quantitative and qualitative data can be used to complement each other.
Reference
Henderson, S. (2010). Stuart Henderson on using screen recording and think alouds in evaluation practice. AEA365. https://aea365.org/blog/stuart-henderson-on-using-screen-recording-and-think-alouds-in-evaluation-practice/