Hi, my name is Lacie Barber and I am the Statistics Counselor at Smith College in the Spinelli Center for Quantitative Learning. I have a Ph.D. in Industrial-Organizational Psychology and a minor in Research Methodology.
Many researchers and practitioners work hard to develop and/or select psychometrically sound quantitative measures for their projects. Unfortunately, far less attention is paid to monitoring and evaluating the quality of the data before conducting statistical analyses. This neglect rests on several questionable assumptions: that all respondents completed the survey in a distraction-free environment, read and understood all instructions and items, and were not motivated to rush through the survey because of time constraints, incentives, or lack of enthusiasm for the project. By incorporating quality control checks into your survey, you can create a priori rules for dropping cases that may be obscuring data trends and significant effects.
Hot Tip: Quality control checks are especially important for online surveys that are completed anonymously and/or offer an incentive for completion. Individuals are more likely to complete a survey haphazardly when an incentive is involved.
Cool Trick: In an online survey, outline for respondents the factors associated with data integrity (e.g., a quiet setting, careful reading) and give them the option to return later to complete the study or to decline participation. For program evaluations, remind participants why their responses are valuable and how you will use the results to improve the program and/or organization.
Cool Trick: Early in an online survey, create a disqualification item based on following instructions. See below for an example:
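The original example is not reproduced here. As an illustration only, the sketch below shows one common form of disqualification item, an instructed-response item, together with a simple check; the item wording and expected answer are my own assumptions, not taken from the source.

```python
# Hypothetical instructed-response ("disqualification") item.
# The item text and expected answer are illustrative, not from the source.
INSTRUCTED_ITEM = (
    "To show that you are reading carefully, please select "
    "'Somewhat disagree' for this item."
)
EXPECTED_ANSWER = "Somewhat disagree"

def passes_instructed_item(response: str) -> bool:
    """Return True if the respondent followed the embedded instruction."""
    return response.strip().lower() == EXPECTED_ANSWER.lower()

# A respondent who selects anything else can be routed out of the survey
# early, before investing time in the full questionnaire.
print(passes_instructed_item("Somewhat disagree"))  # True
print(passes_instructed_item("Strongly agree"))     # False
```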
Cool Trick: Embed several quality control items within a long list of items that share the same response options. Failing multiple of these items provides a rationale for dropping that respondent’s scores. See below for an example:
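The original example is not reproduced here. A minimal sketch of this kind of rule, assuming three hypothetical embedded check items (item names and instructed responses are my own illustration):

```python
# Hypothetical embedded quality-control items scattered through a long scale.
# Maps each check item to its instructed response on a 1-5 scale.
CHECK_ANSWERS = {"qc_1": 2, "qc_2": 5, "qc_3": 1}

def count_failed_checks(responses: dict) -> int:
    """Count how many embedded quality-control items a respondent missed."""
    return sum(
        1 for item, expected in CHECK_ANSWERS.items()
        if responses.get(item) != expected
    )

# A priori rule sketch: drop respondents who fail two or more checks.
respondent = {"qc_1": 2, "qc_2": 3, "qc_3": 4}
failed = count_failed_checks(respondent)
drop = failed >= 2
print(failed, drop)  # 2 True
```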
Cool Trick: Pick an online survey platform that allows you to record completion time. Isolate respondents whose completion times fall two or three standard deviations below the mean and check whether their responses differ significantly from the rest of the sample. The reliability of your measures may also be lower among these participants.
Cool Trick: Later in the survey, repeat a question that was asked earlier. For questions with Likert scales, you can create a “consistency score” by subtracting one response from the other. Difference scores far from zero may indicate random or careless responding.
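A minimal sketch of such a consistency score, assuming a hypothetical item repeated verbatim on the same 1-5 scale (item names are my own illustration):

```python
# Hypothetical repeated Likert item; both versions use the same 1-5 scale.
def consistency_score(responses: dict) -> int:
    """Difference between an item and its later repeat; 0 = perfectly consistent."""
    return responses["job_sat_early"] - responses["job_sat_repeat"]

careful  = {"job_sat_early": 4, "job_sat_repeat": 4}
careless = {"job_sat_early": 5, "job_sat_repeat": 1}
print(consistency_score(careful), consistency_score(careless))  # 0 4
```

Note that if the repeated item is reverse-worded rather than verbatim, the score would need to be computed against the reverse-coded response instead.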
Hot Tip: Cross-tabulate responses on two quality control checks to evaluate grounds for case deletion. I have dropped from my analyses responses that fell more than three standard deviations below the mean on completion time AND had consistency scores more than three standard deviations from the mean, because my preliminary analyses showed a significant association between shorter completion times and inconsistent responding.
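The cross-tabulation step can be sketched with the standard library; the per-respondent flags below are illustrative assumptions, and the deletion rule mirrors the AND logic described above.

```python
from collections import Counter

# Hypothetical flags for each respondent: (fast_completion, inconsistent).
flags = [
    (False, False), (False, False), (False, True),
    (True, False),  (True, True),   (True, True),
]

# 2x2 cross-tabulation of the two quality-control checks.
crosstab = Counter(flags)
for cell in [(False, False), (False, True), (True, False), (True, True)]:
    print(cell, crosstab[cell])

# A priori deletion rule sketch: drop only cases flagged on BOTH checks.
dropped = sum(1 for fast, inconsistent in flags if fast and inconsistent)
print(dropped)  # 2
```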