I’m Kim Kelly, PhD, from the Psychology Department at the University of Southern California, where I teach courses in statistics and research methods. I have been involved in the evaluation of STEM curriculum and professional development programs since 2002, and I have been reflecting on the career path that led me from basic research in psychological science to independent program evaluation of STEM education initiatives. I offer two insights that have been instrumental in my own professional journey from researcher to evaluator.
Rad Resources: Social scientists in particular struggle with the distinction between research and evaluation. To be honest, I still struggle with this distinction, and there are many varieties of opinion on the matter. It’s worth the time to consider the published ideas, not to end the debate, but to weigh the goals and methods of each pursuit and appreciate the practical and intellectual differences between the generalizable knowledge sought in research and the program-specific feedback needed in most program evaluations. Gene Glass wrote about this back in 1971 in Curriculum Theory Network, and the subject regularly appears in books and journals. See more recent comments in Jane Davidson’s editorial in the 2007 Journal of Multidisciplinary Evaluation and by Miri Levin-Rozalis in the 2005 Canadian Journal of Program Evaluation. Reflecting on this key distinction has enabled me to appropriately refine my deep knowledge of the goals and methods of psychological science research and become a more effective program evaluator.
Cool Trick: It may seem like a no-brainer to suggest establishing a good relationship with those we evaluate or evaluate for. Yet the training of researchers often emphasizes a detached, objective stance toward participants, and research participants are typically cooperative because they have volunteered to take part. When I first began program evaluation, I failed to appreciate the interpersonal dynamics of evaluation: the perceptions of threat often experienced by participants and clients, the reality of unwilling participants and investigators, and the barriers this lack of trust posed to obtaining valid data. In my work with programs, I emphasize rapport building, on both social and programmatic levels, to establish trust. Rapport building at the programmatic level includes looking for ways to make evaluation data more useful, and more widely used, as part of program development. For example, I shared the results of content knowledge assessments with teachers in a metacognitive reflection activity. Being both a familiar and friendly face maximizes the likelihood that you will get the access and cooperation you need to conduct an effective program evaluation.
Kim Kelly is a leader in the newly formed STEM Education and Training TIG. Check out our TIG Website for more resources and information.
The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Kim, thanks for the shout out! 🙂
Just wanted to provide an updated link for the JMDE editorial (I think you are referring to the Unlearning Some of Our Social Science Habits one?): http://journals.sfu.ca/jmde/index.php/jmde_1/article/view/68/71 (JMDE moved hosts, so the URL changed – sorry!).