I’m Patrick Koeppl, cultural anthropologist, mixed methods scientist, Halloween enthusiast, and Managing Director at Deloitte Consulting LLP. Throughout my career, I have found that mixed methods are often the best way to conduct broad evaluations of complex systems and situations. Qualitative approaches such as in-depth interviews, focus groups, participant observation, and policy reviews, among many others, have a place in developing understanding. Determining the validity and reliability of qualitative data collected via mixed methods poses both challenges and opportunities for authentic understanding of complex systems and phenomena.
Lesson Learned: The science of numbers, statistics, randomized samples and double-blind studies may indeed be described as “hard,” but qualitative approaches are not “soft.” Rather, they are “difficult.”
Practitioners of the “soft sciences” often face criticisms that their endeavors are not scientific. Naysayers may claim that qualitative research is somehow illegitimate—and too often anthropologists, sociologists, and others retreat to the dark, brooding corners of their craft, frustrated that their methods, approaches, and findings may not be taken seriously by the “real scientists” who frame the discussion. Qualitative evaluators fall into this trap at their own peril—there is nothing inherently unscientific about qualitative methods or the findings and inferences drawn from qualitative data.
Hot Tip: It is the practitioner, the scientist, who should bring rigor and science to qualitative methods. Set up your approach with rigor by asking yourself:
- Are the evaluation questions clear?
- Is the evaluation design congruent with the evaluation questions?
- How well do findings show meaningful parallelism across data sources?
- Did coding checks show adequate agreement across interviewers and coders (see the sketch following this list)?
- Do the conclusions ring true, make sense, and seem convincing to the reader?
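On that coding question, one common check is a chance-corrected agreement statistic such as Cohen’s kappa, computed over excerpts that two coders labeled independently. Below is a minimal sketch in Python; the code labels and coder decisions are hypothetical illustrations, not from an actual study, and real evaluations would use many more excerpts and often more than two coders.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders.

    coder_a, coder_b: parallel lists of code labels applied to the
    same excerpts (one label per excerpt).
    """
    assert len(coder_a) == len(coder_b), "coders must label the same excerpts"
    n = len(coder_a)
    # Observed agreement: share of excerpts where both coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's label frequencies.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(
        (counts_a[label] / n) * (counts_b[label] / n)
        for label in set(coder_a) | set(coder_b)
    )
    if expected == 1:  # degenerate case: only one label in use
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders label ten interview excerpts.
coder_1 = ["barrier", "facilitator", "barrier", "context", "barrier",
           "facilitator", "context", "barrier", "facilitator", "context"]
coder_2 = ["barrier", "facilitator", "context", "context", "barrier",
           "facilitator", "context", "barrier", "barrier", "context"]

print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.70
```

A kappa near 1 signals strong agreement; values well below that are a prompt to revisit code definitions and retrain coders before analysis proceeds.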
Lesson Learned: Qualitative data are the source of well-grounded, richly descriptive insights and explanations of complex events and occurrences in local contexts. They often lead to serendipitous findings and launch new theoretical integrations. When properly derived, findings from qualitative data have a quality of authenticity and undeniability (what Stephen Colbert calls “truthiness”).
Hot Tip: Establish scientific rigor to determine reliability and validity in the following ways:
- Use computer-assisted qualitative data analysis software such as ATLAS.ti or NVivo to organize and support analysis
- Develop a codebook and data collection protocols to improve consistency and dependability (an illustrative codebook entry is sketched after this list)
- Engage in triangulation with complementary methods and data sources to draw converging conclusions
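To make the codebook tip concrete, here is a minimal sketch of what a codebook entry might record so that different coders apply the same code the same way. The codes, definitions, criteria, and quotes below are purely illustrative assumptions, not content from any actual evaluation.

```python
# A minimal, illustrative codebook: each entry fixes a code's definition,
# inclusion/exclusion criteria, and an anchor example for coder training.
# All content here is hypothetical.
CODEBOOK = {
    "barrier": {
        "definition": "Respondent describes something impeding access to the program.",
        "include_when": "An obstacle is stated or clearly implied by the respondent.",
        "exclude_when": "The obstacle affects a third party, not the respondent.",
        "example": '"I couldn\'t get to the clinic because the bus stopped running."',
    },
    "facilitator": {
        "definition": "Respondent describes something easing access to the program.",
        "include_when": "A help or enabler the respondent actually experienced.",
        "exclude_when": "Hypothetical helps the respondent did not experience.",
        "example": '"The evening hours made it possible for me to attend."',
    },
}

def print_coding_guide(codebook):
    """Render the codebook as a simple guide for coder training sessions."""
    for code, entry in codebook.items():
        print(f"CODE: {code}")
        for field, text in entry.items():
            print(f"  {field}: {text}")
        print()

print_coding_guide(CODEBOOK)
```

However it is stored, the point is the same: explicit definitions and boundary criteria are what make coding decisions repeatable and auditable.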
Finally, placing qualitative results in the context of a story and narrative, so they convey concrete, vivid, and meaningful findings, is convincing and compelling to evaluators, policy makers, and practitioners. Such questions and tools warrant the scientific use of qualitative data collection and analysis in the quest for “useful” evaluation.
The American Evaluation Association is celebrating Deloitte Consulting LLP’s Program Evaluation Center of Excellence (PE CoE) week. The contributions all this week to aea365 come from PE CoE team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.