Hello! I am Lisette Nieves, founder of Year Up NY, a service organization that has happily and successfully used an experimental evaluation to assess program effectiveness. Today's blog post reflects on administrative challenges that need not get in the way of using experiments in practice. I strongly believe in the nonprofit sector and what it does to support individuals in overcoming obstacles and building competencies to be successful. I also know that people in this sector want to know the impact of their efforts. With this understanding in mind, choosing to use an experimental evaluation at Year Up NY was not difficult, and the journey offered three key lessons.
Lesson #1: Evaluation involves change, and change poses challenges.
Although everyone on the team agreed to support the evaluation, the frontline team members—those who worked most closely with our young adults—found it difficult to deny access to those seeking program enrollment. Buy-in was especially challenging once the names of prospective participants were attached to an experimental pool, personalizing their imminent assignment to treatment and control groups. As a committed practitioner and program founder, I found it important to surface questions, ask for deeper discussions around the purpose and power of our evaluation, and create space for team members to express concerns. Buy-in is a process with individualized timetables; staff may need multiple opportunities to commit to the evaluation effort.
Lesson #2: Program leaders tend to under-communicate when change is happening.
Leading a site where an experimental evaluation was taking place forced me to use language that shepherded staff through a high-stakes change effort. Team members worried that the results would surprise us (although prior monitoring implied we were on track). The evaluation became central to weekly meetings, where staff engaged in healthy discussion about our services and how we were doing. With information on attrition patterns, even the most cautious staff members began to fully buy in to the experimental evaluation. In the end, evaluation was about making us stronger and demonstrating impact—two key values that we as a team were wedded to with or without an experimental evaluation.
Lesson #3: Experimental evaluation is high stakes, but it can be hugely informative.
An experimental evaluation has many requirements, and some of them are challenging (but not insurmountable) to implement among social service providers nationwide. But I have no regrets about engaging in an experimental evaluation: we learned more about our organization and systems than we would have otherwise. Experimental evaluation made us a true learning organization, and for that reason I encourage other organizations to consider taking their evaluation efforts further.
Up for discussion tomorrow: more things you thought you couldn’t learn from an experiment but can!
The American Evaluation Association is celebrating the Design & Analysis of Experiments TIG Week. The contributions all week come from Experiments TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.