Hi, we’re Southeast Evaluation Association (SEA) members Taylor Ellis, a doctoral student and lead evaluator, and Dr. Debra Nelson-Gardell, an Associate Professor providing consultation, both at the School of Social Work at The University of Alabama. Together, we are evaluating a program that provides community-based, family-inclusive intervention for youth with sexual behavior problems (youngsters whom lay people might call juvenile sex offenders). This post shares our lessons learned about approaching resistance in program evaluation.
Taut and Alkin (2002) reported that people stereotypically view program evaluation as “being judged…that the evaluation is used to ‘get me’, that it is not going to be used to assist me but is perceived to be negative and punitive in its nature” (p. 43). Our evaluation faced derailment, perhaps because the program had never been evaluated before, or perhaps simply because resistance to evaluation is inevitable. Accepting that resistance as normal, we tried to address it, but our efforts didn’t work as we had hoped. Below are the lessons we learned through hard knocks.
Lessons Learned:
- The Importance of Stakeholder Input: Stakeholders need to believe evaluators will listen to them. Early in the evaluation, we interviewed stakeholders about their ideas for program improvement to promote engagement in the process. What the interviews lacked was follow-through: we did not clearly show stakeholders how their input shaped the evaluation.
- Remember and (Emphatically) Remind Stakeholders of the Evaluation’s Purpose/Goals: As the evaluation proceeded, its purpose got lost because we did not keep reminding stakeholders of it. Our project updates should have been more intentional about showing movement toward that purpose. We lost sight of the forest as we negotiated the trees, and this lack of constant visioning led many stakeholders to view the evaluation as an unnecessary hassle.
- The Illusion of Control: Don’t (always) take it personally; easily said, not easily done. Despite our efforts, a great deal of resistance, pushback, and dissatisfaction remained. After weeks of feeling at fault, we discovered that things were happening behind the scenes over which we had no control, but which directly affected the evaluation.
Knowing these lessons earlier could have made a difference, and we intend to find out. Our biggest lesson learned: Resist being discouraged by (likely inevitable) resistance, try to learn from it, and know that you are not alone.
The American Evaluation Association is celebrating Southeast Evaluation Association (SEA) Affiliate Week with our colleagues in the SEA Affiliate. The contributions all this week to aea365 come from SEA Affiliate members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Love your post, and thank you for your honesty. I wonder if a better way to approach this resistance to evaluation would be to educate evaluands right up front about the methods used in the evaluation and to ask them to use these methods themselves. This participatory approach transforms evaluands into active participants in the evaluation, rather than people who merely receive the outputs of our work.

I strongly agree with your second bullet point about reminding stakeholders of the evaluation’s purpose/goals. That’s really fundamental, and I think we very often lose track of it.

And, YES! Do not take it personally!

Thank you for your feedback and encouragement, JP! We really appreciate it. If you don’t mind, I would love to hear more about how to get evaluands to use the methods themselves (i.e., what does that look like in practice?). I tried educating, but did not take this step.