STEM TIG Week: Taylor Martin on Evaluating Learning and Engagement in Active Learning Environments

Hi! My name is Taylor Martin and I’m a Professor in Instructional Technology and Learning Sciences at Utah State University. I also run the Active Learning Lab (activelearninglab.org), where I work with a great bunch of colleagues and students researching and evaluating active, fun learning experiences for kids of all ages around programming, computational thinking, mathematics, and engineering.

We work with kids and teachers learning in MakerSpaces and FabLabs (http://makermedia.com) and in day camps and schools programming in visual programming languages like Scratch (https://scratch.mit.edu). In both settings, instructors and teachers see kids creating the cool, complex products they want to create, whether it’s a 3D-printed geometric shape, a miniature animal or person, really anything they dream up, or a Scratch animation or game. What’s harder to see is the complex computational thinking that goes into making these objects. As teachers and instructors, we also see how excited and engrossed kids often are in these activities, and in general we think that should make these really promising environments for learning.

Hot Tip: We can often evaluate students’ level or type of engagement better when we measure it without interrupting their activity. Think about it: how excited would you be about programming a game in Scratch if I kept asking you every five minutes, “On a scale of 1-10, how engaged are you right now?” Researchers like Ryan Baker, Sidney D’Mello, and others have been using machine learning to build detectors for engagement states like concentration, boredom, and frustration that sidestep this issue.
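For readers curious what such a detector looks like under the hood, here is a minimal sketch: a classifier trained on features distilled from interaction logs, with engagement labels supplied by trained human observers. The feature set and the synthetic data below are illustrative assumptions for the purpose of the sketch, not any published detector.

```python
# Minimal sketch of a sensor-free engagement detector in the spirit of the
# Baker/D'Mello line of work: a classifier trained on interaction-log
# features, labeled by human observers. Features and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per 20-second clip of log data; columns stand in for hypothetical
# features such as actions per minute, mean pause length, error rate, and
# help requests. Real detectors engineer these from the raw event stream.
X = rng.random((300, 4))
# Labels a trained observer might assign to each clip.
y = rng.choice(["concentration", "boredom", "frustration"], size=300)

detector = RandomForestClassifier(n_estimators=100, random_state=0)
# Cross-validation gives a first read on whether the features carry signal.
print(cross_val_score(detector, X, y, cv=5).mean())
```

Once trained on observer-coded sessions, a detector like this can score new log data continuously, so engagement can be tracked without ever interrupting the student.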

Rad Resource: For evaluating learning, researchers like Val Shute, Matthew Berland, and Marcelo Worsley have been creating novel ways to figure out what people know at any given moment based on what they are doing. One example is applying machine learning and data mining to the backend data produced while a kid plays a game like Physics Playground. People are also starting to create generalized tools with built-in data capture and analysis that can plug into games on different platforms while the games are being developed, such as adageapi.org. Another example is collecting sensor data and pulling it together to figure out what people are doing in environments like MakerSpaces.
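As a concrete, heavily simplified illustration of mining backend game data, the sketch below tallies per-player, per-concept success rates from a stream of telemetry events. The event schema here is an assumption for illustration only; it is not the actual ADAGE API, and a real system would feed tallies like these into a proper student model.

```python
# Sketch of mining backend telemetry from a learning game. The event fields
# (player, concept, success) are hypothetical, chosen for illustration.
import json
from collections import defaultdict

raw_events = [
    '{"player": "p1", "concept": "levers", "success": true}',
    '{"player": "p1", "concept": "levers", "success": false}',
    '{"player": "p2", "concept": "ramps",  "success": true}',
]

# Tally successes and attempts per (player, concept); the ratio is a crude
# evidence-of-mastery estimate that a real system would refine with a model
# such as Bayesian knowledge tracing.
tallies = defaultdict(lambda: [0, 0])  # [successes, attempts]
for line in raw_events:
    event = json.loads(line)
    key = (event["player"], event["concept"])
    tallies[key][0] += event["success"]
    tallies[key][1] += 1

for (player, concept), (wins, tries) in sorted(tallies.items()):
    print(f"{player} {concept}: {wins}/{tries} successful attempts")
```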

I’d love to hear back from others with the resources they’ve developed and discovered in this space.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

One thought on “STEM TIG Week: Taylor Martin on Evaluating Learning and Engagement in Active Learning Environments”

  1. Hello Taylor,
    As a teacher who provides active learning environments for my students every day, I have found that assessing Scratch assignments can be very tricky! I like your point that asking students every five minutes to rate their engagement on a scale of 1-10 would most likely be disengaging. Usually, I have students play around with the program for five minutes just to get their computer jitters out, and then I start their assignments. I also make sure to provide a rubric for whatever assignment I would like them to complete and to create success criteria with them. That way I can float around the room and ask students where they are relative to the success criteria, which helps me assess how well they are completing the assignment as well as how engaged they are.
    I have yet to find a tool that can run data analysis on the process of a student’s work in Scratch, but I can usually see how complex they are making their assignments while I float around the room and check on them. One can also see the complexity of a student’s assignment once it is completed and handed in online! (A sketch of what such an analysis tool might look like follows this comment.)
    Thank you for your article; I will now look into a data analysis tool to better assess computer programming assignments!
    Cheers,
    Mandy
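Since Mandy asks about tooling for analyzing Scratch work, here is a rough sketch of the kind of analysis such a tool might run on a saved Scratch 3 project (an .sb3 file). The complexity metrics are illustrative assumptions in the spirit of tools like Dr. Scratch, not a validated rubric, and the file path is hypothetical.

```python
# Sketch of a crude complexity profile for a Scratch 3 project. An .sb3 file
# is a zip archive whose project.json lists each sprite's blocks; counting
# blocks, distinct opcodes, and top-level scripts gives a rough profile.
import json
import zipfile
from collections import Counter

def complexity_profile(sb3_path):
    with zipfile.ZipFile(sb3_path) as archive:
        project = json.loads(archive.read("project.json"))

    opcodes = Counter()
    scripts = 0
    for target in project["targets"]:      # the stage and each sprite
        for block in target["blocks"].values():
            if isinstance(block, dict):    # skip bare variable references
                opcodes[block["opcode"]] += 1
                scripts += block.get("topLevel", False)

    return {
        "total_blocks": sum(opcodes.values()),
        "distinct_opcodes": len(opcodes),
        "scripts": scripts,
        "uses_loops": any(op.startswith("control_repeat") for op in opcodes),
    }

print(complexity_profile("my_project.sb3"))  # hypothetical path
```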

