WMU Scribing: Michael Kiella on Social Science Standards and Ethics

My name is Michael Kiella. I am a student member of the American Evaluation Association and a doctoral student at Western Michigan University in Kalamazoo, Michigan. I served as a session scribe at Evaluation 2010 for Session 393: Research on Evaluation Standards and Methods. For this post, I will focus on the presentation by Dr. Linda Mabry (Washington State University at Vancouver) entitled Social Science Standards and Ethics: Development, Comparative Analysis, and Issues for Evaluation.

Lessons Learned:

1. Justification is not equivalent to doing the right thing.

Dr. Mabry indicated that ethics within our profession is not an answer for all time, but a sequence captured in context and history. She wants us to know that the ethical standards of modern times developed against a historical backdrop, and she selected the Nuremberg War Trials, the Declaration of Helsinki, and the Belmont Report as benchmarks.

Dr. Mabry argues that there must be a standard of ethics that applies within social science and evaluation efforts. She offers the professional standards of the American Psychological Association (APA) and the American Evaluation Association (AEA) as evidence that practitioners in these fields have addressed the issue. Yet these standards remain problematic.

2. Is the presumption of compliance enough to be compliant?

These standards are problematic because they do not include enforcement components, and both explicitly indicate that they do not establish a baseline of liability. Dr. Mabry suggests that a possible alternative is for government to take a role in enforcing professional standards where human subjects are used in research.

3. It is reasonable for government to exercise its authority over our research endeavors.

Dr. Mabry argues that it is the legitimate place of government to exercise its role as an enforcement agency, balancing the extraction of data for the public good against the protection of the subjects from whom the data are extracted. But this too is problematic, because the American Evaluation Association has not agreed on a common definition of what evaluation really is. The establishment of oversight committees with enforcement authority is difficult because the definition of evaluation is so broad, and the extent of our practices so varied, that we are unlikely to agree upon compliance criteria.

4. Cultural Sensitivity as an arena for new standards.

Dr. Mabry proposes that in order to appropriately evaluate culturally distinctive features, we are required to make the strange familiar. The nuances of a culture may not be immediately observable or understood, so feasibility remains in tension with ethical research.

At AEA’s 2010 Annual Conference, session scribes took notes at over 30 sessions and we’ll be sharing their work throughout the winter on aea365. This week’s scribing posts were done by the students in Western Michigan University’s Interdisciplinary PhD program. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.
