AEA365 | A Tip-a-Day by and for Evaluators

TAG | Informal Learning

Sarah Cohn (Science Museum of Minnesota) and Scott Pattison (Oregon Museum of Science and Industry) here, members of the CASNET research team. Together we led NISE Net's evaluation capacity building (ECB) initiative, Team-Based Inquiry (TBI), ". . . a practical approach to empowering education professionals to get the data they need, when they need it, to improve their products and practices and, ultimately, more effectively engage public and professional audiences." We created TBI as a professional development opportunity and embedded evaluation resource for science museum staff within the Nanoscale Informal Science Education (NISE) Network. It draws heavily on participatory evaluation, action research, practitioner inquiry, and ECB.

TBI is an ongoing, four-step cycle designed to guide museum and education professionals with little evaluation or research experience successfully through the inquiry process.

  • Question – Team members identify and prioritize the inquiry question(s) that guide the project and address challenges that have arisen in the team's work.
  • Investigate – The team collects data to answer its question(s), using methods appropriate to the study's goals and realistic given time and resource constraints.
  • Reflect – The team discusses and analyzes the data to identify key findings and lessons learned, fostering a shared understanding within the team.
  • Improve – The team identifies, prioritizes, and implements changes to products or practices based on TBI findings, then identifies new questions that feed into the next inquiry cycle.

Lessons Learned:

  1. Make time for training – Since 2011, we've trained professionals through 75-minute conference presentations, 3-hour workshops, 6-hour workshops, 2-day trainings, and 6-8 month cohorts that include hour-long phone calls and a two-day in-person meeting. We've seen 3-hour workshop participants walk away feeling somewhat comfortable with the process and capable of implementing it in their own work, while 75-minute session participants did not reach that point.
  2. Provide examples – Stories and examples, especially those from other practitioners, are most effective at providing a sense of what TBI looks like in real life. We’ve integrated many examples of TBI from different settings into our trainings to ensure participants get the most complete picture of how TBI, and evaluation, can help them in their daily work. Sample TBI studies can be found here.
  3. Connect online and in person – Two cohorts (18-20 people from 9-10 institutions) received the bulk of their TBI training through online meetings. They grasped the content but were not ready to fully realize the process or its utility until we met in person to analyze data together. Seeing the TBI cycle through to completion was certainly important, but putting faces to names and voices and talking in depth with each person played an immensely important role in participants' engagement, understanding, and sense of community.

Rad Resources: Free TBI resources:

The American Evaluation Association is celebrating Complex Adaptive Systems as a Model for Network Evaluations (CASNET) week. The contributions all this week to aea365 come from members of the CASNET research team. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hello, I am Rupu Gupta, Analyst at New Knowledge Organization Ltd. and Co-Chair of AEA’s Environmental Program Evaluation Topical Interest Group. My evaluation work focuses on learning about the environment and conservation in informal settings. As we celebrate Earth Day, I would like to share some reflections on evaluating these experiences.

Lessons Learned: Informal learning settings are critical places for learning about the environment and actions to protect it. These settings offer opportunities for "free-choice" learning, where learners choose and control what they learn. They are typically institutions such as zoos, botanic gardens, aquariums, and museums, distinct from formal educational settings like schools. With hundreds of millions of visits to these institutions annually, they are prime settings for engaging the public in thinking about the environment. Conservation education is often a key aspect of these institutions' programming: visitors can learn about different forms of nature (e.g., animals, natural habitats), the threats they face (e.g., climate change), and actions to address them (e.g., reducing energy use). Educational experiences here are often referred to as informal science learning because of their connection with understanding natural systems.

Learning about the environment in informal settings can happen through a variety of experiences. Informal learning is socially constructed through a complex process that involves oneself, close others (friends, family), and more distant others (institution staff). Specific experiences, like animal encounters, hands-on interactions with flora in botanic gardens, or media-based elements (e.g., touch screens), enable visitors to engage with information about nature and the environment. Docents play an important role in helping visitors 'interpret' the messages embedded in these experiences and exhibits. Evaluators assessing the impact of different experiences in informal settings need to be mindful of the multiple pathways through which visitors engage with environmental information.

Informal learning manifests broadly. Learning experiences in informal settings encompass outcomes beyond the learning traditionally associated with school-based education. In making meaning of these varied experiences, learning is tied to multiple aspects of the human experience. Outcomes can be cognitive (e.g., gaining knowledge about climate change impacts), attitudinal (e.g., appreciating native landscapes), emotional (e.g., fostering empathy towards animals), or behavioral (e.g., signing a petition for an environmental cause). A mix of qualitative and quantitative methods is best for capturing these complex learning experiences. By considering the range of learning possibilities, evaluators can design and conduct effective evaluations to understand how people engage with the multi-faceted topic of the environment.
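
As a concrete illustration of mixing methods, the minimal sketch below tallies qualitatively coded visitor responses by outcome dimension, a common way to summarize qualitative data quantitatively. The codes and sample responses are hypothetical, invented for this example.

```python
# Minimal sketch: tally qualitatively coded visitor responses by outcome
# dimension. The four dimensions follow the types named above; the coded
# responses themselves are hypothetical examples.
from collections import Counter

DIMENSIONS = ("cognitive", "attitudinal", "emotional", "behavioral")

# Each tuple: (visitor_id, dimension a coder assigned to one statement)
coded_responses = [
    ("v01", "cognitive"), ("v01", "emotional"),
    ("v02", "attitudinal"), ("v03", "behavioral"),
    ("v03", "cognitive"), ("v04", "emotional"),
]

counts = Counter(dim for _, dim in coded_responses)
total = sum(counts.values())
for dim in DIMENSIONS:
    n = counts.get(dim, 0)
    print(f"{dim}: {n} statements ({n / total:.0%} of all coded statements)")
```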

Rad Resources: The following resources are great for getting acquainted with evaluation in informal learning settings:


Hi there, Liz Zadnik here, bringing you another Saturday post focused on practitioner experiences and approaches. Today I’m going to focus on my personal journey to stay up-to-date and relevant in all things evaluation.

I was not formally trained as an evaluator – everything I know has been gained through hands-on, on-the-job experience and mentorship (I'm very lucky to have been able to work with a few brilliant evaluators and researchers!). Self-study, reading, and ongoing training have been intentionally incorporated into my personal and professional schedule.

Rad Resource: Coursera is an excellent resource for online learning. You can even get certifications in concentrations after completing a set of courses in sequence. They have a number of courses around data analysis and data science!

Rad Resource: iVersity coordinates and archives some really interesting and innovative massive open online courses (MOOCs). The “Future of Storytelling” course gave me a number of ideas and skills for crafting accessible and engaging trainings and resources, as well as some insights for capturing stories for program evaluation. Recent and future courses focus on idea generation methods and gamification theory.

Lesson Learned: Follow your gut! At first I thought I needed to select courses, books, and resources that were explicitly "evaluation-y," but I found that the courses that made me say "Oooh! That looks interesting!" were the ones that helped me think creatively and find ways to enhance my evaluation and program development skills.

Rad Resource: MIT OpenCourseWare is much more structured and academic, as these are courses taught at MIT. These require – for me – a bit more organization and scheduling.

Rad Resource: edX is another great place to engage in online courses and MOOCs. Right now they have two courses on my "to-take" list: Evaluating Social Programs and The Science of Everyday Thinking.

Are there other online course providers or resources you rely on to stay current? How do you stay up-to-date and innovative as you balance other obligations and projects?



Hey there. My name is Kathleen Tinworth and I am the Director of Visitor Research & Program Evaluation at the Denver Museum of Nature & Science. It’s my pleasure to finish up the week of visitor studies posts.

For many of you AEA365 readers, this week was the first time you’d ever heard of visitor studies. I know the feeling! I didn’t always work in a museum; I have only been in the sector for 3 years. I remember when I first read the job description for my position. I thought, “Evaluation in a museum? Incredible! That sounds like the coolest job ever.” Turns out I was right! At the time, I hadn’t considered the valuable role evaluation can and should play in cultural institutions.

Before jumping into visitor studies, I designed and conducted evaluations for criminal justice agencies and child welfare organizations. I got linked into AEA through that work and have stayed connected. One of the things I love about AEA is the exposure to and connections across different disciplines. I am always inspired by how adaptable and relevant methodologies, instruments, and data collection techniques can be across diverse fields. I hope this week’s AEA365 provided you with some new, fun, and creative ways to think about evaluation in your own work through the lens of visitor studies.

The VSA AEA group is really looking forward to meeting many of you at the AEA conference this November in San Antonio! Make sure you look out for us and say hello!

Here are some of my favorite “rad resources” online to further introduce you to the world of visitor studies:

Rad Resource: The Visitor Studies Association (VSA, http://visitorstudies.org/) is committed to understanding and enhancing visitor experiences in informal learning settings through research, evaluation, and dialogue. All the AEA365 bloggers this week are active members (including the current VSA president herself, Kirsten Ellenbogen). An absolute essential on the website is the full searchable archive (http://www.visitorstudiesarchives.org/) where you can access and download some fantastic articles and papers—for FREE!

Rad Resource: CARE (which stands for Committee on Audience Research and Evaluation, http://www.care-aam.org/) is a standing professional committee within the American Association of Museums. Check out their directory of evaluators (http://www.care-aam.org/documents/directory_of_evaluators/directory_of_evaluators_2009.pdf) to find someone in your area!

Rad Resource: Informal Science online (http://informalscience.org/) isn’t all about science. It’s a phenomenal resource and online community for informal learning projects, research and evaluation. You will find news, interviews from the field, funding opportunities, upcoming conferences and events—not to mention a virtual clearinghouse of published and unpublished studies and results. Like the VSA archives, it’s FREE!

This contribution is from the aea365 Tip-a-Day Alerts, by and for evaluators, from the American Evaluation Association. We are pleased to welcome colleagues from the Visitor Studies Association – many of whom are also members of AEA – as guest contributors this week. Look for contributions preceded by “VSA Week” here and on AEA’s weekly headlines and resources list.


My name is Kirsten Ellenbogen. I’m Director of Research & Evaluation at the Science Museum of Minnesota and President of the Visitor Studies Association.  I hope you’re enjoying VSA Week on AEA365.

Rad Resource: The development of individual identity has long been considered an outcome of experiences in informal learning environments. But identity has recently become more central in the field with the 2009 report by the National Academy of Sciences (NAS), "Learning Science in Informal Environments." The report identifies and provides evidence for six "strands of learning" that occur in informal learning environments. What's so rad about this? NAS reports are based on systematic reviews of literature that use strict criteria for what counts as good evidence, and this report is unique in the strength and systematic nature of the evidence it provides for learning in informal environments. You can read (and search) the entire book online or purchase a copy: http://www.nap.edu/catalog.php?record_id=12190

Cool Trick: Two evaluation approaches that are particularly useful for gathering data about identity development in informal learning environments are embedded evaluation and reflective interviews. Embedded evaluation integrates "invisible" evaluation tools into existing program activities. For example, in a youth program focused on interactive media, the projects produced by youth are posted to an online environment. http://info.scratch.mit.edu/ScratchR:_The_Online_Community The projects can be downloaded by others, modified, and reposted. All activity in the online community can be tracked, and the ongoing development of the youth's projects can be analyzed in more detail.
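
To make that "invisible" data collection concrete, here is a minimal sketch of how such tracked activity might be summarized. It assumes a hypothetical export of community activity logs; the file name, columns, and action label are invented for illustration, not the actual ScratchR data model.

```python
# Minimal sketch: summarize embedded evaluation data from an online
# community. Assumes a hypothetical CSV export (activity_log.csv) with
# columns user_id, project_id, action, timestamp -- not the actual
# ScratchR data model.
import csv
from collections import defaultdict
from datetime import datetime

revisions = defaultdict(list)  # project_id -> timestamps of each update

with open("activity_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["action"] == "project_update":
            revisions[row["project_id"]].append(
                datetime.fromisoformat(row["timestamp"]))

# Repeated revision over a long span is one trace of sustained engagement
# with a project -- data collected without interrupting the activity itself.
for project_id, stamps in sorted(revisions.items()):
    stamps.sort()
    span = (stamps[-1] - stamps[0]).days
    print(f"{project_id}: {len(stamps)} updates over {span} days")
```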

Cool Trick: Another evaluation approach useful for gathering data on identity development in informal learning environments is the video-based reflective interview. For example, videotape a museum visitor using an exhibition (with IRB-approved informed consent as appropriate). In the post-visit interview, after the initial set of questions, show the visitor a video segment of his or her interactions with the exhibition, taped just moments before. Use a semi-structured interview approach and ask the visitor to narrate the video and tell you more about what they were doing. This approach can be partially automated using technologies like Video Traces. http://depts.washington.edu/pettt/projects/videotraces.html

Hot Tip: There's an app for that. There are reflective tools that support annotation of images, audio or video diaries, and other approaches that support the evaluation of identity development. Take a look at Everyday Lives or Storyrobe as great starting points. These apps are useful for you as the evaluator, or they can be added to a participant's phone, iPod, iPad, or other device. Adding a tool like this to a device that a participant regularly carries allows ongoing data collection that is reflective and, in some instances, embedded. That makes these apps ideal tools for monitoring identity development.



Hi! I'm Carey Tisdal, Director of Tisdal Consulting, an independent firm that evaluates informal learning environments. Informal learning environments include museums (art, history, science, and children's museums), science-technology centers, zoos, aquaria, parks, television, and radio. I worked as an internal evaluator for nine years and have worked for six as an external evaluator. Recently, field-building and professional development have been the focus of several projects funded by the National Science Foundation. I am evaluating one of these projects, ExhibitFiles. ExhibitFiles is an online community for exhibit designers and exhibition developers. One goal of the site is to provide a place where exhibition developers find out about each other's work. Members can upload case studies, reviews of exhibits they have visited, and useful "bits" about exhibit design processes and materials. Evaluation reports may be attached to case studies. A related goal is the development of professional networks for sharing expertise. Registered members post profiles and contact information. My Visitor Studies Week blog for AEA365 shares an important insight about continuing to learn as we do our work.

Lessons Learned: Actually, lessons re-learned! In this project, the client and I have found formal theory very helpful in thinking about the site and understanding how people use it. I was reminded of Kurt Lewin's wonderful 1951 pronouncement that "there is nothing so practical as a good theory." We found theories comparing and contrasting how communities of practice and communities of interest use digital information (Hoadley & Kilner, 2005) especially helpful in understanding how exhibition developers incorporate the site experience into their work. For example, specific reviews sometimes serve as boundary objects that let people working in different disciplinary areas, and with different training and experiences, develop a common language about a design topic. Since this site is only one element in a range of professional development activities, we have used concepts from the ecology of learning (Brown, 1999) to begin understanding the role of ExhibitFiles as one among a set of professional development activities in which exhibition developers participate. Using a theoretical lens as part of the evaluation has helped the project team (clients) and the evaluators develop a common language and set of ideas to support their decisions about updating the site and planning its future. Formal theory can sometimes be a boundary object for evaluators and clients.

Rad Resource

Brown, J. S. (1999). Presentation at the Conference on Higher Education of the American Association for Higher Education. Retrieved August 15, 2010, from http://serendip.brynmawr.edu/sci_edu/seelybrown/.

Rad Resource

Hoadley, C. M., & Kilner, P. G. (2005). Using technology to transform communities of practice into knowledge-building communities. SIGGROUP Bulletin, 25(1), 31-40.



My name is Cheryl Kessler and I am an independent consultant doing visitor studies. I conduct program and exhibit evaluations in museums and libraries to understand what and how visitors/users learn from their experiences, what they might do with that information, and how new information is integrated with existing knowledge. My Visitor Studies Week blog is about some of my favorite methods.

Visitor studies draw on a number of familiar evaluation and research methodologies. I lean toward and enjoy qualitative methods such as focus groups, drawings, Personal Meaning Mapping, and timing and tracking. Recently, I have been conducting telephone focus groups, which have disadvantages but get the job done when working from a distance with limited travel funds. I have used drawings with elementary school children to document immediate impact, using a rubric created collaboratively with the program coordinator to score vocabulary and concept learning. Personal Meaning Mapping (PMM) is a concept-map-like methodology developed by John H. Falk et al. (1998) to assess individual learning in informal settings across four dimensions: Extent, Breadth, Depth, and Mastery. Originally designed for summative evaluation, PMM is very adaptable for prototyping and for topic and label testing. Timing and tracking, the ultimate in observation in my opinion, is useful for understanding how the public uses or attends to entire exhibitions or individual exhibits within a larger exhibition. I have done timing and tracking studies for formative and summative evaluations in a natural history museum; reflective tracking in a front-end study of living history sites; and tracking as part of strategic planning for the reinterpretation of a historic cultural space, to understand visitor pathways and engagement.
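
For readers new to timing and tracking, the minimal sketch below shows the kind of summary such data supports: median dwell time per exhibit and the share of tracked visitors who stopped there (often discussed as holding and attracting power in the visitor studies literature). The observation records are hypothetical.

```python
# Minimal sketch: summarize timing-and-tracking observations.
# Each record is one observed stop: (visitor_id, exhibit, seconds).
# The records below are hypothetical.
from collections import defaultdict
from statistics import median

observations = [
    ("v01", "entry_video", 45), ("v01", "fossil_wall", 210),
    ("v02", "fossil_wall", 95), ("v02", "touch_table", 180),
    ("v03", "entry_video", 30), ("v03", "fossil_wall", 60),
]

dwell = defaultdict(list)  # exhibit -> dwell times in seconds
stops = defaultdict(set)   # exhibit -> visitors who stopped there
for visitor, exhibit, seconds in observations:
    dwell[exhibit].append(seconds)
    stops[exhibit].add(visitor)

n_tracked = len({v for v, _, _ in observations})
for exhibit in sorted(dwell):
    share = len(stops[exhibit]) / n_tracked   # "attracting power"
    print(f"{exhibit}: median dwell {median(dwell[exhibit])}s "
          f"({share:.0%} of tracked visitors stopped)")
```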

Rad Resources: Some favorite and time-tested resources include:

  • Krueger, R.A., and Casey, M. Focus Groups: A Practical Guide for Applied Research (3rd edition), Thousand Oaks, CA: Sage Publications, 2000.
  • Yalowitz, S., and Bronnekant, K. Timing and Tracking: Unlocking Visitor Behavior. Visitor Studies, 12(1), January 2009, pages 47-64.
  • Serrell, B. Paying Attention: Visitors and Museum Exhibitions, Washington: American Association of Museums, 1998.
  • Falk, J. H., Moussouri, T., & Coulson, D. (1998). The effect of visitors’ agendas on museum learning. Curator, 41(2), 107 – 120. A Google search for “personal meaning mapping” results in a number of studies using this methodology.
  • Informalscience.org, a repository of research and evaluation reports, searchable by institution, method, author, and more.
  • Institute for Learning Innovation has conducted many studies using all of these methods for over 20 years. Most studies are unpublished but may be available by request.
  • Science Museum of Minnesota, Research and Evaluation, The Big Back Yard Study Series (2006) includes a nice timing and tracking report.



Hello from Ithaca, NY. I'm Rick Bonney, director of program development and evaluation at the Cornell Lab of Ornithology. I also serve on the board of the Visitor Studies Association, and I'm thrilled by three projects that the organization is developing with a multitude of partners who all conduct research and evaluation in the field of informal learning. All three projects are funded by the Informal Science Education program of the National Science Foundation (NSF).

Rad Resource: First, we've recently heard that we're receiving another year of funding to continue our partnership with CAISE, the Center for Advancement of Informal Science Education (http://caise.insci.org/). Our expanding role in CAISE involves bridging the gap between visitor research and the practice of program development through workshops, reports, and online resources. For example, a recent article by Beverley Serrell, "Paying More Attention to Paying Attention," provides an excellent overview of timing and tracking techniques (see http://caise.insci.org/resources/vsa-articles).

Rad Resource: Second, we've learned that we will be receiving funding for a project called "Building ISE through informalscience.org," to be conducted in partnership with the University of Pittsburgh Center for Learning in Out-of-School Environments (UPCLOSE) and the Science Museum of Minnesota. This ambitious project will facilitate the growth and use of informalscience.org by enhancing the site's already useful databases and integrating them with a broader set of web-based resources. The project will also conduct a synthesis of evaluation reports covering all available data across all sectors of informal science education, and will produce a framework for coding and organizing both current and future evaluation data. This effort will open opportunities for database mining for further research and program planning. In addition, the grant will allow us to create a new section of the VSA website to assist project developers in locating evaluators to partner with in their work. Evaluators will be able to use the site to post profiles and examples of their work.

Rad Resource: Finally, VSA will be a major partner in a new project just awarded to the Cornell Lab of Ornithology called DEVISE – Developing, Validating, and Implementing Situated Evaluation Instruments for Informal Science Education. Recently the ISE field has seen growing calls for new evaluation instruments, tools, and techniques that can be customized for use across ranges of similar projects. Such resources would bring common measures to evaluation practice, facilitate cross-project comparisons, and, most importantly, provide evaluation guidance to project developers who are inexperienced or lack major resources for evaluation. VSA will play several roles in this project, including hosting webinars to train people to use the new tools and techniques.


