
STEM Education & Training TIG Week: Measuring STEM Outcomes in Young Students by Kathy Dowell


Hello! My name is Kathy Dowell and I am an evaluator at The Evaluation Group. In my 25+ years as a program evaluator, I have worked with many STEM-focused programs to assess student outcomes. Recently, I have worked with STEM projects that target young students (grades K-8), and one challenge I have consistently run into with this younger population is determining what outcomes to measure and how to measure them. Regardless of the STEM discipline a program focuses on, my colleagues and I have found three primary outcome domains that are important to measure in these students: 1) interest and attitudes, 2) grit/self-efficacy in the STEM discipline, and 3) skill development. Within those three outcome areas, there are other issues to consider, such as:

  • How can we ensure reliable measurement of STEM outcomes in young students?
  • Which methods are best for assessing the three STEM outcome domains listed above, especially when working at the elementary and middle school levels?
  • What criteria should evaluators use when selecting and/or designing instruments for a STEM project?

Hot Tips

Consider contextual factors. As with any evaluation, really think about the context in which your program operates. Contextual factors, such as the age of your students, should be considered when selecting data collection tools and deciding on methods. For very young students, a hands-on skill assessment or classroom observations might make more sense than a survey.

Consider the scope of your program. What outcomes are realistic given the nature and “dosage” of your program? For example, will a 2-week summer program have a measurable impact on attitudes? Would an authentic assessment, such as completing a STEM project, be more appropriate for a short-term intervention? Students who are exposed to STEM programming in a classroom setting once a month should not have the same expected outcomes as those who participate in an intensive afterschool program. Failure to take scope and dosage into account can lead to unreliable results or an inability to detect impact.

Consider logistics (burden on teachers or program staff). In today’s education environment, teachers and staff have multiple demands on their time. When choosing a data collection tool or method, make sure teachers or program staff can accommodate your approach. For example, for some teachers a short survey might be feasible, but classroom observations requiring a longer time and work commitment may not be.

Talk to other STEM evaluators. Find out how other STEM evaluators are measuring STEM outcomes in young students. Many are happy to share tools and instruments and give advice.

Don’t try to measure everything. It is better to get high-quality data on one or two primary outcomes than low-quality data on many.

Use validated instruments where possible. Make sure the instruments you use have been validated with a population similar to the one you work with. This will help ensure that your results are valid and reliable and will stand up to rigorous review by the evaluation community.
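As a concrete illustration of checking reliability, one common statistic for survey instruments is Cronbach’s alpha, which measures the internal consistency of a set of items. The short Python sketch below shows how you might compute it from item-level scores on a STEM attitude survey; the function name and the data are hypothetical examples for illustration, not drawn from any particular validated instrument.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: six students answering a four-item attitude survey (1-5 scale)
scores = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
]
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")

As a rough rule of thumb, values above about 0.7 are commonly treated as acceptable internal consistency, though thresholds vary by field and by instrument.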


The American Evaluation Association is hosting STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. All contributions this week to AEA365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.
