I am Holly Kipp, a Researcher at The Oregon Community Foundation (OCF). Today’s post shares some of what we’re learning through our efforts to measure social-emotional learning (SEL) in youth as part of our K-12 Student Success Initiative.
The Initiative, funded in partnership with The Ford Family Foundation, aims to help close the achievement gap among students in Oregon by supporting expansion and improvement of out-of-school time programs for middle school students.
Through our evaluation of the Initiative, we are collecting information about program design and improvement, students and their participation, and student and parent perspectives. One of our key data sources is a survey of students about their SEL.
Rad Resources: There are a number of places where you can learn more about SEL and its measurement. Some key resources include:
- The Collaborative for Academic, Social, and Emotional Learning, or CASEL
- The University of Chicago Consortium on School Research, in particular their Students & Learning page
In selecting a survey tool, we wanted to ensure the information collected would be useful both for our evaluation and for our grantees. By engaging grantee staff in the tool selection process, we gave them a direct stake in the outcome and, we hoped, encouraged buy-in to using the chosen tool – not only for our evaluation efforts but for their ongoing program improvement processes.
Hot Tip: Engage grantee staff directly in vetting and adapting a tool.
We first mined grantee logic models for their outcomes of interest, reviewed survey tools already in use by grantees, and talked with grantees about what they wanted and needed to learn. We then talked with grantees about the frameworks and tools we were exploring in order to get their feedback.
We ultimately selected and adapted The Youth Skills and Beliefs Survey developed by the Youth Development Executives of King County (YDEKC) with support from American Institutes for Research.
Rad Resource: YDEKC has made available lots of information about their survey, the constructs it measures, and how they developed the tool.
Rad Resource: There are several other well-established tools worth exploring, such as the DESSA (or DESSA-mini) and DAP and related surveys, especially if cost is not a critical factor.
Hot Tip: Student surveys aren’t the only way to measure SEL! Consider more qualitative and participatory approaches to understanding student social-emotional learning.
Student surveys are only one approach to measuring SEL. We are also working with our grantees to engage students in photovoice projects that explore concepts of identity and belonging – elements that are more challenging to measure well with a survey.
Rad Resource: AEA’s Youth Focused TIG is a great resource for youth focused and participatory methods.
The American Evaluation Association is celebrating Oregon Community Foundation (OCF) week. The contributions all this week to aea365 come from OCF team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
1 thought on “OCF Week: Holly Kipp on Measuring social-emotional learning in middle schoolers”
Thank you for posting on SEL in youth. I am a teacher who is currently studying program evaluation. The division I work for has just developed a framework around SEL, so I am fairly new to the concepts around it; however, I find the area both fascinating and challenging.

Since I am new to program evaluation, I have a couple of questions about the data collection methods you used. You talked about photovoice projects, which I had never heard of but which look like a great tool. Do you come across privacy issues with the photos? Who interprets the photos afterward to ensure the photographer’s intended meaning is preserved? Do you often encounter data collection methods in program evaluation that require the evaluator to interpret the data? What types of biases can be encountered?
Thanks for sharing your perspective on this work.