Hello! We are Rebecca Teasdale, Lindsay Maldonado, and Cecilia Garibay—three Chicago-based evaluators who practice in museums, aquariums/zoos, libraries, and other informal learning contexts. Given that Chicago is known for its world-class cultural institutions, it’s a great home base for our work. Today, we’re pleased to share three lessons we’ve learned about evaluation of informal learning experiences.
Developing methods. Evaluation of informal learning is a young and growing area of practice. As the field has matured, we’ve learned that many of the evaluation methods developed for structured, compulsory learning environments are a poor fit for the self-directed, free-choice contexts we examine—prompting evaluators to develop new methods. Beverly Serrell developed some of the earliest museum-specific methods, including a criteria framework for exhibition evaluation and methods for examining how people move through exhibitions. More recently, an issue of New Directions for Evaluation (NDE) reported on current work to develop new, creative methods suited to these complex environments.
Common constructs. Informal learning experiences that focus on science, technology, engineering, and mathematics (STEM) often seek to foster STEM interest, engagement, and identity. But these constructs are challenging for evaluators to define and measure. Recently, we’ve drawn on interviews with leading researchers, conducted by a task force at the Center for the Advancement of Informal Science Education (CAISE), to help us consider how best to conceptualize and study these constructs in STEM-focused and other informal learning contexts.
Equity and inclusion. Historically, museums have been oriented toward visitors from privileged segments of society. A recent report from the American Alliance of Museums and a toolkit and report from another CAISE task force highlight steps museums can take to foster greater inclusion and equity. We’ve seen the key role that evaluation can play in these efforts by foregrounding the experiences of communities that have been marginalized and using evaluation data to help program staff examine their practices and assumptions (check out this recent NDE chapter for examples).
Informalscience.org is an essential resource from CAISE that includes a repository of evaluation reports; an instrument clearinghouse; the interviews, report, and toolkit discussed above; and other Rad Resources.
The American Evaluation Association is celebrating Chicagoland Evaluation Association (CEA) Affiliate Week. The contributions all this week to aea365 come from CEA members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.