Hello all, My name is Justin Long and I am a 4th year M.S./Ph.D. student at the University of North Carolina at Greensboro. I was always told to use the methods and the evaluation theory that fit the context: if an evaluation is better served by a participatory approach, don’t force a deliberative-democratic one. In some of my early work I found this out the hard way. It’s not just about the methods we use; it’s how we justify those methods. All evaluators come into the field with personal biases about what knowledge is and how it is created or discovered.
This is called our epistemological/ontological view or framework, and everyone has a framework whether they’re directly aware of it or not. There are two main epistemological camps I’d like to talk about. Don’t panic. First, knowledge exists in the world and we attempt to estimate it, though we can never observe it directly; that’s the post-positivist perspective. Second, knowledge is a human construction that the researcher seeks to understand; this is called constructivism. For practitioners, who may be less familiar with the epistemological underpinnings of evaluation, this framework is not always explicit, and that’s a big problem. The framework affects not only which methods we choose but also how we justify and interpret them. Using a particular method doesn’t presuppose an epistemological framework; rather, the framework shapes how you use the method and interpret its results. When we fail to acknowledge our epistemological framework, we bias our findings.
Lessons Learned:
Early in my career I worked with a community nonprofit serving adults with special needs and their caregivers. During our work I decided to use a methodology driven by a post-positivist epistemology, a choice shaped by my experimental psychology background. It was a complete and utter disaster. I used a structured interview: no emotion, no follow-up, just questions. It was the most awkward interview of my life. My ontology and epistemology biased my approach to methods. The approach didn’t fit the context, and both the results and the interviewee suffered.
Rad Resources:
- Carter, S. M., & Little, M. (2007). Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research, 17(10), 1316-1328.
- Hatch, J. A. (2002). Doing qualitative research in education settings. SUNY Press.
The American Evaluation Association is celebrating Theory and Practice week. The aea365 contributions all this week come from Dr. Ayesha Boyce and her University of North Carolina at Greensboro graduate students’ reflections on evaluation theory and practice. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hi Justin –
I’m currently working as an evaluation coordinator at a nonprofit that serves hundreds of developmentally disabled adults across a wide geographic area. Something I’ve been struggling with is how to gather their feedback about our services. We currently have an annual client survey (75%–90% of which are filled out by parents/guardians) and have considered doing in-person interviews in the future (unlikely, though, as our resources are limited). Are there other articles/resources/materials you would recommend for considering/embracing the epistemological experiences of such individuals? Your ‘failure’ is illuminating, but I’m more interested in what success it may have led to.
Thanks so much!
Hi Justin – Great post! I really appreciate how you “failed forward” and shared the learning with us. We should all do the same.