Interview Sampling by Beverly Peters

Hello again! I am Beverly Peters, an assistant professor of Measurement and Evaluation at American University. This is the third article in a five-part series on Using Interviews for Monitoring and Evaluation. In the previous article of this series, we discussed unstructured, semi-structured, and structured interviews, and the circumstances under which an evaluator would use each.

Equally important as the kind of interview we use is the sampling technique that we employ. That is, we need to decide whom we will interview.

Who makes a good interview respondent? Merriam and Tisdell (2015) tell us that good respondents are those who understand the project culture and operations, and are able to reflect on these. At the same time, good key respondents (those we rely on for particular insight) are able to understand the role of the researcher and offer a perspective on the topic, while expressing opinions, thoughts, and feelings reflectively.

Usually when evaluators conduct interviews, we try to ensure that we are interviewing a range of people with different backgrounds and experiences who might have different opinions on, or experiences with, our topic. This helps to ensure that our data is balanced and reflects the opinions and experiences of the population under study. However, an evaluator will approach qualitative sampling very differently from quantitative sampling. As qualitative evaluators, we tend not to use statistically representative samples in our work. We also tend not to be overly concerned about the size of the sample population in relation to the larger population. However, we are still very cognizant of the role a solid sampling strategy plays in gathering valid and reliable data to support our work.

Evaluators use different sampling techniques depending on data collection needs. Three types of sampling that I have used in my work include snowball sampling, theoretical sampling, and purposeful sampling.

One sampling method that qualitative evaluators often use is snowball sampling. In this type of sample, one respondent recommends another, until the sample snowballs to a large number of respondents. Snowball sampling can be a convenient way to get a rich account of the project location and its population. It also helps us to make contacts with potential respondents in populations where we might lack familiarity. However, such a sampling technique runs the risk of only interviewing people with the same social, economic, or professional background, thereby limiting and perhaps even skewing the perceptions and range of data collected.

Sometimes qualitative evaluators employ a method commonly used in ethnography: non-statistically representative, or what some call theoretical, sampling. Such a sampling technique is related to the creation of grounded theory, which is common in ethnography. With this type of sampling, the evaluator aims to interview a range of people with demographic or other characteristics similar to those in the project population. The evaluator pays special attention to ensuring that the sample includes people from all walks of life and project experiences. Evaluators might find this technique useful when gathering different opinions and perspectives from a wide range of stakeholders.

A very common sampling technique in qualitative evaluation is purposeful sampling, where evaluators interview someone because they play a particularly important role in the project. The evaluator oftentimes chooses the respondents before research even begins. An evaluator might include project managers, project stakeholders, community leaders, or other community members in the sample population. Using this sampling technique helps the evaluator to gain an understanding of operations on the ground, especially from key personnel involved in the project.

Another consideration is sample size, which relates to what qualitative researchers call the “saturation of categories.” Before and as they conduct interviews, evaluators usually set categories for which they need to collect data, and then they conduct enough interviews to saturate those categories—that is, until additional interviews do not lead to the identification of new themes or concepts. Hagaman and Wutich (2016) give some guidance on how many interviews are enough to identify new themes and saturate categories. A general guide is that most themes are identified within 10 in-depth qualitative interviews, and no new themes are identified after about 20. Depending on how one defines it, the saturation of categories usually takes place within 15-20 interviews, but may require up to 40.

[Diagram: the population at the center, with theoretical, purposeful, and snowball sampling surrounding it.]

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
