AEA365 | A Tip-a-Day by and for Evaluators

October 28, 2015

STEM TIG Week: Daisy Rutstein, Eric Snow and Marie Bienkowski on Aligning Assessment Purpose and Use in Measuring Students’ Computational Thinking Practices

Hello, we are Eric Snow, Marie Bienkowski, and Daisy Rutstein from the Center for Technology in Learning at SRI Education. In our work in computer science education research and evaluation, we are routinely asked to help clients implement assessments that support valid inferences about students’ computational thinking learning outcomes. We have learned many lessons from these experiences and would like to share some Lessons Learned and Hot Tips with the AEA STEM-CS community.

The new Exploring Computer Science (ECS) and Computer Science Principles (CSP) curricula are spreading throughout U.S. high schools via NSF-sponsored pilots and, combined with the advocacy efforts of organizations such as Code.org, will continue to expand. As these CS curricula reach more schools and students, teachers implementing the instructional activities need high-quality assessments so they can make valid inferences about students’ computational thinking (CT) practices and better support student learning of those practices.

Lessons Learned: Assessments are used in different ways for different purposes. Assessment “use” means interpreting scores and acting on, or making inferences from, the interpretation. Some uses of assessments, each with their own purpose and supported inferences, are listed below.

Formative Use

  • Purpose: discerning student misconceptions and/or preparation for future learning.
  • Score interpretation: where a student is in his or her learning of particular concepts, pointing to instructional actions to improve learning or dislodge misconceptions.

Summative Use

  • Purpose: obtaining an overall score indicating whether students have grasped the important concepts taught.
  • Score interpretation: overall proficiency of the student.

Teacher Evaluation

  • Purpose: determining how effective a teacher is at teaching the material of interest.
  • Score interpretation: effectiveness of the teacher and his/her instruction.

Research or Project Evaluation

  • Purpose: determining the efficacy/effectiveness of one or more education interventions.
  • Score interpretation: differentiating students or teachers, or determining growth of teachers or students.

Our experience has taught us that the use of assessments and their results needs to be approached with caution: using an assessment for a purpose for which it has not been validated can have negative consequences.

Hot Tips: Evaluators can help clients ensure that the assessments they want to use are aligned with the purposes for which the assessments were designed and validated by:

  • Co-designing a clear logic model relating program inputs, processes and short- and long-term outcomes. This will help clarify the purposes of any assessments that need to be administered.
  • Helping clients recognize that assessments are not “plug-and-play,” and helping them obtain the resources they need to critically evaluate the appropriateness of existing assessments for their measurement needs.
  • Helping clients use assessment results in ways consistent with the intended purpose(s) of the assessment.

The American Evaluation Association is celebrating STEM Education and Training TIG Week with our colleagues in the STEM Education and Training Topical Interest Group. The contributions all this week to aea365 come from our STEM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
