
STEM TIG WEEK: Share and Share Alike: Challenges and Lessons Learned from Using Embedded Assessments as Shared Measures by Veronica Del Bianco, Cathlyn (Cat) Davis Stylinski, Rachel Becker-Klein, Amy Grack Nelson, Karen Peterman and Jenna Linhart

Hello – we are a team of evaluators and researchers who work together on a project called Streamlining Embedded Assessments (NSF #1713434), and we want to share a bit about our efforts to develop embedded assessments as a form of shared measures that can serve the needs and contexts of citizen science and other forms of science learning. Our team brings extensive experience in research and evaluation and is based in non-profits, universities, and museums: Rachel Becker-Klein, Veronica Del Bianco, Amy Grack Nelson, Jenna Linhart, Karen Peterman, Tina Phillips, Cathlyn (Cat) Davis Stylinski, and Andrea Wiggins. We are partnering directly with practitioners from 10 citizen science projects across the U.S. on this exploratory research.

First, let’s clarify some terms.

  • Shared measures are grounded in theory and developed with psychometrics to support their use across contexts. They should be broadly applicable.
  • Embedded assessments (EAs) are activities that are integrated into the learning experience, allowing learners to demonstrate their competencies. They should be performance-based, authentic, and embedded in program activities (of course!).
  • Citizen science engages participants in applying science inquiry skills to contribute to research endeavors.

Previous research demonstrates that formalized skill assessment of any type is uncommon in citizen science, and EAs offer tremendous potential to fill that gap. The challenge, then, is how to create shared measures of volunteer skills that meet our criteria: (1) broadly applicable to multiple projects, (2) indistinguishable from the learning experience itself, (3) performance-based, with learners demonstrating their proficiency, and (4) authentic to the learning experience.

Lessons Learned

As we work with our partnering citizen science project leaders, we are finding tensions among the four criteria; it may not be possible to fulfill all of them equally. Embedded and authentic tend to complement each other, but the more embedded an assessment becomes, the less broadly applicable it is likely to be in other settings.

For instance, we created a simulation video of a volunteer collecting data in the field. The assessment was performance-based because volunteers demonstrated competencies rather than rating themselves, but watching a video lacks the authenticity of actually being outside collecting the data.
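To make this concrete, here is a minimal sketch in Python of how the behaviors a volunteer demonstrates in such a simulation might be scored against a common rubric so that results are comparable across projects. The criteria and point values below are purely illustrative, not our project's actual instrument:

```python
# Minimal sketch (hypothetical criteria and point values, not our actual
# instrument): scoring the behaviors a volunteer demonstrates during a
# video-based embedded assessment against a rubric shared across projects.

RUBRIC = {
    "locates_sampling_site": 1,    # found the correct monitoring location
    "follows_protocol_steps": 2,   # carried out the data-collection protocol
    "records_data_accurately": 2,  # entered readings on the datasheet correctly
}

def score_response(demonstrated: set) -> dict:
    """Score one volunteer's observed behaviors against the shared rubric."""
    earned = sum(pts for crit, pts in RUBRIC.items() if crit in demonstrated)
    possible = sum(RUBRIC.values())
    return {"earned": earned, "possible": possible,
            "percent": round(100 * earned / possible, 1)}

# Example: a volunteer who found the site and followed the protocol,
# but misrecorded a value.
print(score_response({"locates_sampling_site", "follows_protocol_steps"}))
# -> {'earned': 3, 'possible': 5, 'percent': 60.0}
```

Because every partnering project would score against the same rubric, the resulting scores can be compared across contexts, which is what makes a measure "shared" rather than project-specific.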

Rad Resources: 

Here are some resources that we are using as we continue our work on creating shared measures:


The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the Science, Technology, Engineering, and Mathematics Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM Education and Training TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
