My name is Marian Dyer, and I write today from the perspective of a Ph.D. student in Research and Evaluation in Education who is also a new assistant superintendent for a suburban school district serving about 5,000 students.
As part of an advanced course in program evaluation, I worked with two classmates to evaluate the new teacher induction program in my school district. We engaged in a moderately participatory evaluation, with my evaluation team providing significant structure to the process. As a school-level leader at that time, I was invested in strengthening this program to grow and retain beginning educators in the district. I knew our recommendations, if implemented, would support developing teachers more effectively. When I was named interim assistant superintendent late that spring, I looked forward to having the authority to implement the changes described in the recommendations.
In my new role as assistant superintendent, I also undertook an evaluation of the district’s processes for registering new students. District staff and families criticized the existing process – families weren’t always sure where to get information, district personnel did not know how to help families with the online component, and school administrators lacked clarity about which staff were responsible for which activities. These concerns occasionally caused delays in students starting school and created a less-than-ideal first experience in the district for some families. My proposed formal evaluation of this process was met with great enthusiasm!
Lessons Learned
Now that I’m on the other side of both of these evaluations, I’ve learned two valuable lessons:
Investing time in planning the evaluation is worth it. Taking the time to identify the goal of evaluating the program focuses the work on that outcome and relieves you from having to consider everything else. In evaluating the registration process, I had limited resources to work with, so a complete overhaul of registration was out of the question; instead, I evaluated how efficiently the process used the available resources to achieve its goals. By using targeted questions in my interviews with staff, I obtained precisely the information I needed and little extra, saving time.
Even with evidence-based recommendations, effecting change is hard. I thought it would be smooth sailing to implement recommended changes to the mentoring program. While some things were immediately within my control (e.g., communicating about recruitment of a diverse pool of mentors), others required investment from staff members (e.g., involving school administration in mentoring decisions), and some staff were reluctant to go along with certain changes. Staffing changes threw a wrench in the revision of the registration process, at least temporarily, but I’m optimistic about implementing the revisions over the summer.
Even with these challenges, the deliberate and collaborative process of evaluating these programs and adopting recommendations has yielded benefits. As a result of the evaluations and recommendations, a more diverse pool of educators participated in mentor training, and new staff who took on registration tasks received effective training. These positive experiences with evaluation have led to support for an upcoming evaluation of the new math curriculum implementation. Over time, fostering a positive and professional culture of evaluation in the district will engage more partners in the work and result in better buy-in for implementing recommendations.
The American Evaluation Association is hosting PreK-12 Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to AEA365 come from our PreK-12 Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.