Hello! We are Dr. Kristi Manseth and Regina Wheeler, Portland-based evaluators working for Pacific Research and Evaluation (PRE).
The Oregon Legislature enacted House Bill 3499 (HB 3499) in 2015 to develop and implement a statewide education plan for English Learners (EL) in the K-12 education system. Forty Oregon school districts were identified in Spring 2016 as having the greatest need to improve outcomes for EL students. For the past year, our team at PRE has been working with the Oregon Department of Education (ODE) to evaluate HB 3499, both in terms of implementation efforts and outcomes for English Learners.
This project has reminded our team how easy it is to become biased during data collection and how important it is to include all stakeholders in evaluation efforts. When the evaluation was initially designed, it was heavily weighted toward the voice of the school districts receiving the HB 3499 funds and did not include any data collection with ODE staff or the EL advisory committee. The EL advisory committee is made up of HB 3499 stakeholders, including an EL parent representative, district stakeholders, representatives from advocacy groups and nonprofit organizations, and educators and community members who advocated for the passage of the bill. We quickly learned that there was more to this evaluation than understanding the districts’ experience, how they used their funding, and how this money has impacted outcomes for English Learners. It is also about understanding the story behind HB 3499, how the law can be successfully upheld moving forward, how districts can be evaluated more fairly, and how ODE can effectively support these efforts.
Lessons Learned: Our team at PRE, like many other evaluators, has started to pause and explore our own biases and perspectives as a key step in our evaluation process. Although many of us were trained to be objective researchers, we understand that the perspectives we bring to the work cannot truly be separated from the evaluation. Expanding data collection efforts and allowing for time to recognize and process our biases and perspectives has resulted in a more well-rounded and meaningful evaluation process.
Rad Resource: Check out MQP’s blog for more about confusing empathy with bias. Another fun resource we have been using for this project is Canva, an affordable and user-friendly online application that allows those of us who are not savvy graphic designers to create quick, visually appealing deliverables.
This week, AEA365 is featuring posts from evaluators in Oregon. Since Evaluation 2020 was moved from Portland, OR to online, a generous group of Oregon evaluators got together to offer content on a variety of topics relevant to evaluators. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Hello!
My name is Katie Vanderstelt, and I am currently working through my Professional Master of Education. I am wrapping up a course called Program Inquiry and Evaluation.
The core of my program inquiry has been a literacy program called Reading Recovery that we run in our school. Although this program runs across Canada, my focus is on the implementation within my own school. Throughout this course, several of my colleagues have suggested that I continually check for bias: in this scenario, I am both the grade 1 teacher (a stakeholder) and the evaluator of the program, which could easily introduce bias into the evaluation.
Similar to your point about expanding data collection efforts, I considered a point made by Weiss (1998) that including clients (in this case, students in the program and their parents) in data collection may bring forth different perspectives and help address the inequalities that can influence an evaluation. My hope is that this helps decrease the amount of bias within this evaluation and allows for a more meaningful evaluation.
I appreciate that, as evaluators, you stopped and explored your own biases. Were there any key steps that you took in recognizing and processing your own biases within evaluations? As trained objective researchers, is there any advice you would give to help keep bias to a minimum within an evaluation?
I look forward to hearing from you!
Weiss, C. H. (1998). Have We Learned Anything New About the Use of Evaluation? The American Journal of Evaluation, 19(1), 21–33. https://doi.org/10.1016/S1098-2140(99)80178-7
Greetings!
My name is Shawn Skalinski, and I am currently doing my master’s in the PME program at Queen’s University. We are just finishing up a course called “Program Inquiry and Evaluation,” where I have, for the first time, learned about many aspects of the fascinating world of evaluation. One comment you made in this particular blog that caught my eye was, “Expanding data collection efforts and allowing for time to recognize and process our biases and perspectives has resulted in a more well-rounded and meaningful evaluation process.” Of course, we learned about the importance of recognizing bias and some ways to help make an evaluation more credible and less biased; however, I am very curious about the specific strategies or procedures you use during an evaluation to lessen bias. Aside from including more than one evaluator in a program evaluation, what other specific actions can be taken, in your experience? I am looking forward to hearing your insights!