Hi, I’m Deborah Good, Director of Organizational Learning and Research for a nonprofit called Future Focused Education. Our organization works toward individual and system changes, always with an eye toward our city’s most vulnerable young people. We run a paid internship program for high school students, support a network of innovative schools, elevate student voices, and advocate for policies that nurture student learning.
As an internal evaluator, I have the fun of working closely with my colleagues as they design, plan, and implement initiatives, while doing my best to maintain an evaluator’s perspective. We also contract with external evaluators, who provide an invaluable point of view and frequently employ more robust methodologies than we use in-house. That said, the decision to invest in internal evaluation has proven indispensable to our organization.
I am learning lessons along the way. Here is my brief guidebook to success as an internal evaluator.
- Prioritize continuous improvement. I strive to use data as a flashlight, not a hammer, but it is common in the nonprofit and social service arenas to assume internal evaluation is for compliance. In a previous job where I served as the quality improvement coordinator, “improvement” was only in the job title. The data I collected was primarily used to demonstrate to our higher-ups and the State of New Mexico that we were in compliance with rules and regulations. Now I gently remind colleagues that grant reporting is important, but using data to improve is our North Star.
- Regularly revisit your evaluation questions. I have adapted Results-Based Accountability (RBA) to the needs of our organization. This means my go-to research questions are (a) How much did we do? (b) How well did we do it? and (c) Is anyone better off? I am learning not to limit myself to these three questions, and to listen everywhere for interesting questions in our work. For example, today’s question is: Does students’ school attendance correlate with their attendance at internships? (A quick sketch of how that question might be checked appears after this list.)
- Get creative with your methods. RBA traditionally uses quantitative measures, but I always use varied sources, including qualitative data. In addition to surveys, interviews, and observations, what about using mobile applications like ImBlaze, facilitating discussions using unique protocols, or analyzing hand-drawn ecomaps?
- Lead frequent data dives with staff, partners, and young people. Who better to make sense of data than the people who live it? When leading data dives, I usually hand out graphs, tables, and qualitative data summaries without significant interpretation. I then lead a discussion on what is interesting and surprising, and what implications the data might have for planning next steps and future strategy.
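To illustrate the kind of question in the second tip, here is a minimal sketch of how the attendance-correlation question could be explored. It assumes a hypothetical spreadsheet with one row per intern and columns for school and internship attendance rates; the file name, column names, and analysis choices are invented for this example, not a description of our actual data or methods.

```python
# Hypothetical sketch: does school attendance correlate with internship attendance?
# Assumes a CSV with one row per intern and two attendance-rate columns.
import pandas as pd
from scipy.stats import pearsonr

# File and column names are illustrative only.
df = pd.read_csv("intern_attendance.csv")
df = df.dropna(subset=["school_attendance_rate", "internship_attendance_rate"])

# Pearson correlation between the two attendance rates.
r, p_value = pearsonr(df["school_attendance_rate"], df["internship_attendance_rate"])
print(f"n = {len(df)}, Pearson r = {r:.2f}, p = {p_value:.3f}")

# A scatter plot often helps a data dive more than the statistic alone.
ax = df.plot.scatter(x="school_attendance_rate", y="internship_attendance_rate")
ax.figure.savefig("attendance_scatter.png")
```

A plot like this could then be handed out, without interpretation, as one of the artifacts in a data dive.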
For more, contact deborah@futurefocusededucation.org.
The American Evaluation Association is celebrating New Mexico (NM) Evaluators (www.nmeval.org) Week. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hello,
I am an elementary school teacher in Québec City, Canada, and found your piece very interesting. Where I teach, my colleagues and I use evaluation as a “flashlight” in our own classrooms, yet the school board only seems to be interested in “hammer” data for its own ends. We evaluate students in grades 2, 4, and 6 at the elementary level, with grades 2 and 4 being school-board-wide exams and grade 6 being provincial exams. These exams do not seem to be used to improve anything, as we never hear about them again after we submit the results.
That said, the Ministry of Education gives us a general framework of targeted goals for each grade by the end of the year, but each teacher is responsible for getting their students there (there is no mandated curriculum), so every teacher must use their evaluations as a flashlight to see where their students are and how to push them further with that knowledge in mind. This reminds me of your third tip about getting creative with your methods. We use lots of qualitative as well as quantitative evaluation methods, since our report card guidelines allow for professional judgement, taking into account, for example, how much support a student required that term. If a student scored 60% on paper in math but needed me to sit with him for help many times, he would be “passing” on paper but not in reality, and his mark would be lowered according to the amount of support he needed. Another example would be an undiagnosed student with suspected dyslexia writing an exam and losing the meaning and understanding along the way; if I have that student tell me what he wants to put down on paper but can’t, I can better gauge his learning. I am thankful that my school board (and the Ministry of Education) trusts teachers’ professional judgement enough to allow for some wiggle room!
Thanks for a great read!
Fiona Williams