Hello! My name is Tina Phillips, and I am the evaluation program manager at the Cornell Lab of Ornithology. I lead an NSF-funded project called DEVISE (Developing, Validating and Implementing Situated Evaluations), which aims to provide practitioners and evaluators with tools to assess individual learning outcomes from citizen science, or public participation in scientific research (PPSR), projects. Within the context of citizen science, we intend to test and validate a suite of instruments across different projects and assess how they perform in different settings. Our first step was to assess the state of citizen science evaluation, which formed the basis for a draft framework for assessing learning outcomes. The framework comprises six major constructs representing common outcomes across diverse projects: interest in science, motivation to participate, knowledge of the nature of science, skills of science inquiry, environmental stewardship behaviors, and science identity.
Lessons Learned: Developing and validating scales is hard! If you’ve done this before, you know what I mean. If you haven’t, don’t underestimate the amount of time it takes to do this well. For instance, before developing any scales, we conducted an extensive inventory of existing scales that were aligned with our framework and relevant to STEM (science, technology, engineering, and mathematics) and informal science learning environments. Gathering these scales and the literature documenting their psychometric properties was labor intensive. Next, as a team, we reviewed and rated each scale to determine its contextual relevance to citizen science. From there, we devised a plan for each construct: test an existing scale as-is, modify one, or develop a brand-new instrument. For example, one scale is being developed using concept mapping, another is being built from existing scales, and a third is taking shape as an item data bank. Once drafted, these scales still need to be tested with a variety of audiences and in a variety of contexts to meet satisfactory validity and reliability criteria.
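To make the reliability piece a little more concrete: one of the most common checks when testing a drafted scale is internal consistency, often estimated with Cronbach’s alpha. The snippet below is a minimal sketch of that calculation, not the DEVISE team’s actual analysis pipeline, and the survey responses in it are entirely hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    items: 2-D array of shape (n_respondents, n_items), one row per
    completed survey and one column per scale item.
    """
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering a 4-item Likert-type scale
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

Values above roughly 0.70 are conventionally treated as acceptable internal consistency, though the appropriate threshold depends on the stakes of the instrument, and alpha alone says nothing about validity.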
Hot Tip: Seek the help of psychometricians and others who have developed valid and reliable scales.
Rad Resource: Once finalized, the DEVISE toolkit will be openly available via the Citizen Science Toolkit website. This dynamic site is geared toward citizen science practitioners and provides featured projects and a host of resources for working in the citizen science arena.
Rad Resource: Another great resource is the Assessment Tools for Informal Science (ATIS) website. The site offers detailed information on over 60 instruments categorized by age, domain, and assessment type, and its maintainers are currently seeking reviews of instruments from end users.
The American Evaluation Association is celebrating Environmental Program Evaluation Week with our colleagues in AEA’s Environmental Program Evaluation Topical Interest Group. The contributions all this week to aea365 come from our EPE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.