
ACM TIG Week: Collaborating with Practitioners to Create Relevant Instruments to Measure Library Staff Capacity for Informal Learning Activities by Patricia Montaño, Megan Littrell, Anne Gold, Claire Ratcliffe Adams, & Brooks Mitchell

We are Patricia Montaño, Megan Littrell, and Anne Gold from CIRES Education & Outreach at the University of Colorado, Boulder, and Claire Ratcliffe Adams and Brooks Mitchell from the National Center for Interactive Learning (NCIL) at the Space Science Institute in Boulder, Colorado. We collaborate on the We are Water project to design informal Science, Technology, Engineering, Art, and Math (STEAM) experiences for patrons of tribal and rural public libraries in the Four Corners region. Merging the expertise of practitioners, researchers, and evaluators ensures our work is informed by and responsive to the contexts we are studying.

A collaborative process includes making sure the development of research and evaluation instruments is aligned with informal education design and implementation. To achieve this, we created a partnership among practitioners, trainers, and educational researchers, represented by three groups: 1) library staff who host the We are Water exhibition and facilitate STEAM activities during hosting, 2) the National Center for Interactive Learning (NCIL), which trains library staff in STEAM activities and provides resources and support through the STAR Library Network, and 3) CIRES Education & Outreach, which designs the educational research and evaluation.

By bringing practitioners, trainers, researchers, and evaluators together from the start, and through each step of the project, we were able to incorporate multiple perspectives into the development of instruments designed to measure library staff capacity to engage patrons in STEAM. If any one group had not been at the table, the instruments would not have been as valid, as reflective of participants, or as well tested.

Through conversations, we realized that we had slightly different understandings about the context of rural library settings, and context is everything. Hearing from and being advised by rural library staff quickly corrected misunderstandings. For example, rural libraries are superstars at serving large geographic areas with a small staff, typically two to four people, and each staff member is often responsible for all library functions. This is different from larger libraries, where responsibilities are divided among more staff members. Moreover, the pandemic placed unique stressors on rural library work: library staff were expected to provide public and social services beyond their usual job descriptions. Understanding the context of rural libraries and the experiences of their staff was key to creating relevant training modules and survey questions. Library staff provided us with pivotal insights that shaped training formats and ensured greater participation in research and evaluation.

We also learned from library staff that we had different definitions of what STEAM programs looked like and what “counted” as a program. We broadened our definition with the more inclusive phrase, “learning experiences.” In doing so, we could encompass all the efforts of library staff, such as topical book displays and StoryTime.

Lessons Learned

Even if you have experience with a community organization like a library, never assume your experience gives the full picture. Always ask questions, always listen, and expand your understanding.

In a group with different perspectives, it’s unlikely everyone has the same definitions for words and phrases you commonly use in your jobs. It’s always a good practice to ask everyone to share, describe, and give examples to illustrate what those words and phrases mean.

Rad Resources

  • Library Research Service has information and tools on research and evaluation in libraries.
  • STARNet has a wealth of STEM resources for library staff.
  • InformalScience.org has free evaluation and research reports on informal STEM education in libraries and on library staff capacity.

We honor and acknowledge that the University of Colorado’s four campuses are on the traditional territories and ancestral homelands of the Cheyenne, Arapaho, Ute, Apache, Comanche, Kiowa, Lakota, Pueblo and Shoshone Nations.


The American Evaluation Association is celebrating Arts, Culture, and Museums (ACM) TIG Week. The contributions all week come from ACM members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.

1 thought on “ACM TIG Week: Collaborating with Practitioners to Create Relevant Instruments to Measure Library Staff Capacity for Informal Learning Activities by Patricia Montaño, Megan Littrell, Anne Gold, Claire Ratcliffe Adams, & Brooks Mitchell”

  1. Hi there,

    I thoroughly enjoyed your post on incorporating collaborative methods into evaluations. I agree on the importance of collaborating with key groups like staff, researchers, and evaluators at the beginning of and throughout the evaluation process. Interaction in these early stages supports the development of the program evaluation and evaluation strategies.

    It is important to understand the variability that can exist in interpretations of evaluation findings. As you mentioned, we all hold different experiences that factor into how we perceive information. “[Findings] can be construed differently by each person, and one person’s perspective is as valid as any others” (Weiss, 1998, p. 29). Each collaborator possesses different experiences and knowledge that can offer distinctive perspectives on a problem or on the findings, and those varied perspectives can provide a clearer picture of, and insight into, the program. Even creating definitions, like “learning experiences,” can improve how we assess the program. Involving different individuals also reduces the influences and biases in our data collection and conclusions (Scriven, 1996, as cited in Shulha & Cousins, 1997; Weiss, 1998), helping to surface implicit or indirect influences we might otherwise miss.

    In addition to addressing contextual aspects and biases, collaboration engages participants in higher-level thinking through reflection and critical thinking. As Weiss (1998) suggests, “[c]ollaborative evaluation has the side benefit of helping program people reflect on their practice, think critically, and ask questions about why the program operates as it does” (p. 25). Numerous studies have exemplified these benefits, which are valuable practices for any form of evaluation and learning (Ayers, 1987; Cousins, 1995; Cousins & Earl, 1995; Greene, 1988, as cited in Shulha & Cousins, 1997, p. 200; Weiss, 1998). For individuals, collaboration can also contribute to metacognition and self-regulation. Peña-Ayala (2015) further explains that “tools and teams provide a supportive context that facilitates intentional self-regulation and metacognitive control of key learning processes” (p. 80). Incorporating key groups of collaborators can thus provide opportunities to develop metacognition and critical analysis of the program.

    As a last note, I think Shulha & Cousins (1997) summarize collaboration best: “evaluators and program practitioners need to further open the dialogue to better understand the ambiguities of evaluation practice” (p. 204). Not only are we able to analyze and reflect on the program, but we are also given the opportunity to limit fixated perspectives and underlying biases. By engaging with others, we can use the shared information to expand our understanding of programs and identify the most fitting methods of evaluation.

    Thank you for sharing such an insightful post!

    Best, 
    Sherry

    References: 

    Peña-Ayala, A. (2015). Metacognition: Fundaments, applications, and trends. New York: Springer.

    Shulha, L. M., & Cousins, J. B. (1997). Evaluation Use: Theory, Research, and Practice since 1986. Evaluation Practice, 18(3), 195–208. https://doi.org/10.1177/109821409701800302

    Weiss, C. H. (1998). Have We Learned Anything New About the Use of Evaluation? American Journal of Evaluation, 19(1), 21–33. https://doi.org/10.1177/109821409801900103
