STEM TIG Week: Tips for evaluating equity-focused STEM education programs that have good intentions but mixed results by Elizabeth Pierson, Sophia Mansori, Jamie Kynn, and Sara Greller

Hello, we are Elizabeth Pierson, Sophia Mansori, Jamie Kynn, and Sara Greller from Education Development Center. Our research and evaluation work in STEM education brought us together for a panel presentation at AEA 2018 in Cleveland where we discussed the potential disconnects between a program’s equity and access goals and its implementation approaches.

Given the dearth of diversity in most lucrative and stable STEM fields, federal agencies, cultural institutions, universities, and charitable foundations have set lofty goals for increasing access for underrepresented groups, and they hold high expectations for the outcomes of their programs. Given these wide disparities in who is employed in STEM, what responsibility do evaluators have, if any, to ensure that the programs they evaluate are meeting these social justice goals?

One of the many roles of the evaluator is to figure out how to hold up the right mirror at the right angle to help funders and program staff see their work differently. But speaking truth to a funder or program partner can be particularly challenging when it involves questioning a thoughtful and well-intentioned implementation model. In this post, we share some tips and lessons learned in two areas: 1) ways in which evaluators can best measure and collect data around equity even when that is not a program’s primary focus and 2) methods for communicating difficult truths when equity goals are not exactly aligning with a program’s equity outcomes.

Hot Tips: Measuring and collecting equity data

  • Evaluating a program in isolation can mean being less attuned to what is not immediately visible; a landscape review can situate a single program in a broader context and allow for cross-program comparisons.
  • Relying solely on participant self-reported data can leave blind spots; the use of validated tools (such as the Dimensions of Success) can help detect what a program is missing or not doing.
  • Access and participation are just two dimensions of equity, but these data can be used as a gateway to important conversations about program goals and design.
  • For large scale implementation programs, measuring equity of access through publicly available data, such as NCES, might be the most affordable approach, but it is important to recognize and communicate the limitations of using such proxy data.
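To make the "proxy data" tip concrete, here is a minimal sketch of one common way to compare a program's participation against publicly available enrollment figures (such as NCES counts). This example is not from the post; all group names and numbers are invented, and, as the tip cautions, ratios like these are only a rough proxy that should be reported with their limitations.

```python
# Illustrative sketch: comparing a program's participant demographics
# against district-level enrollment as a rough proxy for equity of access.
# All figures below are hypothetical; in practice the "district" counts
# would come from public data such as NCES enrollment files.

def representation_ratios(program_counts, district_counts):
    """Ratio of each group's share of program participants to its share
    of district enrollment. A ratio near 1.0 suggests roughly
    proportional access; a ratio well below 1.0 flags possible
    under-representation worth discussing with program staff."""
    program_total = sum(program_counts.values())
    district_total = sum(district_counts.values())
    ratios = {}
    for group, n in program_counts.items():
        program_share = n / program_total
        district_share = district_counts[group] / district_total
        ratios[group] = program_share / district_share
    return ratios

# Hypothetical counts for one program and its surrounding district.
program = {"group_a": 60, "group_b": 30, "group_c": 10}
district = {"group_a": 500, "group_b": 300, "group_c": 200}

for group, ratio in sorted(representation_ratios(program, district).items()):
    print(f"{group}: {ratio:.2f}")
```

Even a simple table of ratios like this can open the "gateway" conversations the tips describe, provided the evaluator is explicit that district enrollment is a proxy, not a measure of who the program actually intended to reach.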

Lessons Learned: Communicating equity disparities

  • As a first step, understand how equity is being defined, and who is being included in that definition.
  • Focus on the positive before the negative; report what IS possible and what is working before highlighting the areas for growth and improvement.
  • Before writing a final report, do a mini pre-presentation to describe the findings and give the funder the opportunity to ask questions and engage with the data to ensure there are no surprises when they receive the final report.
  • Listen carefully to what funders and partners say, and do not say, during the mini pre-presentation. Refocus the final report if necessary to incorporate relevant components of the conversation and highlight areas of expressed interest.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


  1. Dear Elizabeth, Sophia, Jamie and Sara,

    Your article piqued my interest both as an inclusion support teacher and as a Master's student currently taking a course in 'Program Inquiry and Evaluation'. In this course, I am in the process of creating a program evaluation design for a wheelchair basketball program for youth, and your points about equity are timely, practical and helpful.

    We always hear about education as ‘the great equalizer’ (Horace Mann, 1848); this means allowing access to education for all. However, equality and equity are two different things. While equality means treating every student the same, equity means making sure every student has the support they need in order to be successful. Equity in education requires putting systems in place to ensure that every child has an equal chance for success. As an inclusion support teacher, that is a big part of my job; I am always looking for ways to engage my students meaningfully and practically in all school activities by giving them the tools they need to be successful alongside their peers.

    Not just in education but in programs of all kinds, program designers and implementers should be striving to ensure that there is equity for all participants. I really like your assertion that evaluation data on access and participation can drive important conversations about program goals and design. By looking at equity as one facet of program evaluations (even when stakeholders don’t expressly intend for this to form a part of the evaluation), evaluators can provide stakeholders with important feedback which can ultimately lead to program improvements. In this day and age, we should be aiming to ensure that there is equity in all programs as standard practice.

    Thank you for the other very practical and helpful tips you provided. Your recommendation to look at a program's actual 'definition of equity' is an obvious but likely neglected component of evaluations; in my own program evaluation design, I will look at adding this component. As my wheelchair basketball program for youth is one of many programs offered by the organization, your tip to include a 'landscape review' makes a lot of sense to me as well. As a teacher, I know how important it is to 1) begin with the positives before discussing ways to improve, and 2) share evaluative feedback with students before they get their report cards (to eliminate surprises and hear their comments and rebuttals). Evaluations, be they of programs or of student performance, are conversations of sorts. Hopefully, the ways that programs ensure equity will become a natural part of that evaluative conversation as well.
