My name is Megan Olshavsky and I’ve been an evaluator of PreK-12 educational programs for about a year and a half now. Before starting my work in a public school district, I was researching learning and memory processes in rats, earning my Ph.D. in Psychology – Behavioral Neuroscience. My experiments were very controlled: the rats exhibited the behavior or they did not, neurons were active or they were not, results were statistically significant or they were not.
Moving from that environment to the “real world” of a school district, which employs and serves humans in all their messiness, caused some growing pains. How was I supposed to decide whether an educational intervention led to academic improvement without proper control and experimental conditions?! One of the first projects I worked on was a developmental evaluation of a technology initiative. Developmental Evaluation made me feel even more flaky: “Hey everyone! Let’s monitor things as they unfold. What are we looking for? Not sure, but we’ll know it when we see it.”
As I’ve transitioned from researcher to evaluator, three things have helped me feel more legit.
Lesson Learned 1: Trust yourself. You may not be an expert in the area you are evaluating, but you do have expertise in looking at data with a critical eye, asking probing questions, and synthesizing information from a variety of sources.
Lesson Learned 2: Collaborate with a team that has diverse expertise. Our developmental evaluation team included teachers, instructional technology specialists, information systems staff, and evaluators. When everyone on that team can come to the same conclusion, I feel confident we’re making the right decision.
Lesson Learned 3: Embrace capacity building as part of your work. No one would recommend training up stakeholders to do their own inferential statistics. You can, however, influence the people around you to be critical about their work. Framing is critical. “Evaluation” is a scary word, but “proving the project/program/intervention is effective” is a win for everyone. Building relationships and modeling the expertise we talked about in Lesson #1 leads to a gradual institutional shift toward evaluative thinking.
Rad Resource: Notorious RBG: The Life and Times of Ruth Bader Ginsburg. Let RBG be your guide as you gather and synthesize the relevant information, discuss with your diverse team, and advocate for slow institutional change.