Hi! My name is Taylor Martin and I’m a Professor in Instructional Technology and Learning Sciences at Utah State University. I also run the Active Learning Lab (activelearninglab.org) where I work with a great bunch of colleagues and students researching and evaluating active and fun learning experiences for kids of all ages around programming, computational thinking, mathematics and engineering.
We work with kids and teachers learning in MakerSpaces and FabLabs (http://makermedia.com) and in day camps and schools programming in visual programming languages like Scratch (https://scratch.mit.edu). In both settings, instructors and teachers see kids creating cool and complex products, whether it’s a 3D printed geometric shape, a miniature animal or person, or really anything else they dream up, or a Scratch animation or game. What’s harder to see is the complex computational thinking that goes into making these objects. As teachers and instructors, we also see how excited and engrossed kids often are in these activities, and in general, we think that should make these really promising environments for learning.
Hot Tip: We can often evaluate students’ level or type of engagement better when we measure it without interrupting their activity. Think about it: how excited about programming a game in Scratch would you be if I kept asking you every five minutes, “On a scale of 1-10, how engaged are you right now?” Researchers like Ryan Baker, Sidney D’Mello, and others have been using machine learning to build detectors for states of engagement like concentration, boredom, or frustration to avoid this issue.
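To make the detector idea concrete, here is a minimal sketch (not any published detector) of the basic recipe: human coders label short windows of activity, features are computed from the interaction log, and a simple classifier learns to predict the label. The feature names, data values, and nearest-centroid approach are all invented for illustration.

```python
# Hypothetical sketch of an "engagement detector" trained on labeled
# interaction-log features. All feature names and numbers are invented.

from statistics import mean

# Each record: (avg_seconds_idle, actions_per_minute) -> label from a human coder
training = [
    ((2.0, 14.0), "concentrating"),
    ((3.0, 11.0), "concentrating"),
    ((25.0, 1.0), "bored"),
    ((30.0, 2.0), "bored"),
]

def centroids(data):
    """Average feature vector per label (a nearest-centroid classifier)."""
    by_label = {}
    for features, label in data:
        by_label.setdefault(label, []).append(features)
    return {label: tuple(mean(col) for col in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(features, cents):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda label: dist(features, cents[label]))

cents = centroids(training)
print(classify((4.0, 12.0), cents))   # a busy, rarely idle window
print(classify((28.0, 1.5), cents))   # a long-idle, low-activity window
```

The point is not the particular classifier (real detectors use richer features and models); it is that the labels come from unobtrusive log data, so no one has to interrupt the learner to ask how engaged they are.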
Rad Resource: For evaluating learning, people like Val Shute, Matthew Berland, and Marcelo Worsley have been creating novel ways to figure out what people know at any given time based on what they are doing. One example is applying machine learning and data mining to the backend data produced while a kid plays a game like Physics Playground. People are also starting to create generalized tools, such as adageapi.org, that plug into games during development and provide built-in data capture and analysis for games built on different platforms. Another example is collecting sensor data and pulling it together to figure out what people are doing in a variety of environments like MakerSpaces.
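One widely used way to turn a stream of in-game attempts into an estimate of what a learner knows is Bayesian Knowledge Tracing (BKT). The sketch below shows a single-skill BKT update; the parameter values and the attempt stream are invented for illustration, and real stealth-assessment systems fit these parameters from data.

```python
# Hypothetical sketch: Bayesian Knowledge Tracing (BKT), a common way to
# estimate P(learner knows a skill) from right/wrong attempts in backend
# game data. Parameter values and the attempt stream are invented.

def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One BKT step: update P(skill known) after observing one attempt."""
    if correct:
        # Correct answers can come from knowing (minus slips) or guessing.
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        # Wrong answers can come from slips or genuinely not knowing.
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # Chance the learner picked up the skill during this opportunity.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior: P(learner already knows the skill)
for attempt in [True, True, False, True]:  # invented attempt stream
    p = bkt_update(p, attempt)
print(round(p, 3))  # the running estimate after four attempts
```

The appeal for evaluation is the same as with the engagement detectors: the estimate updates continuously from what the learner actually does in the game, rather than from a separate test that interrupts play.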
I’d love to hear back from others with the resources they’ve developed and discovered in this space.
The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to firstname.lastname@example.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.