Hello! I am Rebecca Teasdale, a doctoral student in Educational Psychology specializing in evaluation methodology at the University of Illinois at Urbana-Champaign. I'm also a librarian and have served as an administrator and science librarian in public libraries. My current work focuses on the evaluation of interest-driven learning related to science, technology, engineering, and math (STEM) that takes place in public libraries and other informal learning settings.
I first became involved with the Building Informal Science Education (BISE) project as an intern at the Science Museum of Minnesota while I was pursuing a certificate in evaluation studies at the University of Minnesota. (See my blog post, “Measuring behavioral outcomes using follow-up methods,” to learn more). Now, I’m using the BISE database to support my research agenda at Illinois by identifying methods for evaluating the outcomes of public library STEM programming.
Evaluation practice is just getting started in the public library context, so few librarians are familiar with methods for measuring the mid- and long-term outcomes of informal science education (ISE) projects. I used the BISE reports as a window into (a) the types of outcomes that ISE evaluators study, (b) the designs, methods, and tools that they use, and (c) the implications for evaluating the outcomes of STEM programs in public libraries.
Lessons Learned:
- I’ve found little standardization among the evaluation reports in the BISE database. Therefore, rather than provide a single model for libraries to replicate or adapt, the BISE database offers a rich assortment of study designs and data collection methods to consider.
- Just 17% of the reports in the BISE database included the follow-up data collection necessary to examine mid- and long-term outcomes. This gap suggests that library evaluators should design studies that examine these longer-term effects as well as more immediate outcomes.
- Collecting follow-up data can be challenging in informal learning settings because participation is voluntary, participants are frequently anonymous, and engagement is often short-term or irregular. The reports in the BISE database offer a number of strategies that library evaluators can employ to collect follow-up data.
- All five impact categories from the National Science Foundation-funded Framework for Evaluating Impacts of Informal Science Education Projects are represented in the BISE database. I'm currently working to identify methods and designs within each impact category that may be adapted for the library context. The impact categories are:
- awareness, knowledge or understanding
- engagement or interest
- attitude
- behavior
- skills
Rad Resource:
- I encourage you to check out the BISE project to inform evaluation practice in your area of focus and to learn from the wide variety of designs, methods, and measures used in ISE evaluation.
The American Evaluation Association is celebrating Building Informal Science Education (BISE) project week. The contributions all this week to aea365 come from members of the BISE project team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.