
CLEAR Week: Neha Sharma on What can behavioral science tell us about learning through evaluations?

I’m Neha Sharma from the CLEAR Global Hub at the World Bank’s Independent Evaluation Group. A key Hub role is facilitating learning and sharing knowledge about evaluation capacity development, so I often think about how people learn. In this context, I’ve been reading a lot of behavioral science literature and reflecting on what it takes for people to learn and change their behavior.

Richard Thaler, a University of Chicago economist and behavioral science professor, recently wrote about how he changed his class’s grading scheme to minimize student complaints about “low” grades on the difficult tests he administered (tests designed to produce a wide dispersion of grades and identify “star” students). His trick was to change the denominator of the grading scheme from 100 to 137, meaning that the average student now scored in the 90s and not in the 70s. He achieved his desired results: high dispersion of grades and no student complaints about “low” grades!
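
To make the arithmetic of the trick concrete, here is a minimal sketch in Python (the specific scores are illustrative assumptions, not taken from Thaler’s post): a raw score in the mid-90s out of 137 points reflects roughly the same 70 percent performance as a 70 out of 100.

    # Illustrative sketch only; the scores below are hypothetical, not Thaler's data.
    def as_percentage(raw_score, max_points):
        # Convert a raw test score to a percentage of the available points.
        return 100.0 * raw_score / max_points

    print(as_percentage(96, 137))   # ~70.1 -- the underlying level of performance
    print(as_percentage(70, 100))   # 70.0 -- the same performance on a 100-point scale

The raw number students see is higher, but the underlying percentage is unchanged.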

Thaler’s blog post made me wonder what effect this change in grading scheme had on student learning, and what lessons it carries for communicating tough evaluation results. The relationship between performance and learning holds critical lessons for evaluators: does a 70 disguised as a 90 have an effect on learning?

Like classroom tests, evaluations that are seen as overly harsh or critical are often questioned, and their lessons go underused by the evaluated agency. This doesn’t mean that poor results should not be communicated (they absolutely should), but evaluators need to keep in mind that receiving, and then learning from, bad performance results is not easy when there is a lot at stake: future funding, jobs, professional growth, and political stability. On the other hand, evaluations that merely reaffirm stakeholder biases are futile too.

This balance between communicating actual performance and encouraging learning may be key to determining evaluation use. If evaluations are to fulfill their learning mission, the “how” of learning is just as relevant as the evaluation itself, if not more so. Cognitive science research on behavior change could teach us a lot about how to encourage learning through evaluations. For instance, when trying to change behaviors, easy works better than complicated, attractive works better than dull, and social works better than solitary. Behavioral science is an interesting field of study for evaluators, one that can help us demystify the relationship between evaluation performance and learning.

Rad Resources:

Thaler is one of many behavioral scientists (psychologists and economists among them) writing about what influences our behavior. Here are more.

The American Evaluation Association is celebrating Centers for Learning on Evaluation and Results (CLEAR) week. The contributions all this week to aea365 come from members of CLEAR. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

3 thoughts on “CLEAR Week: Neha Sharma on What can behavioral science tell us about learning through evaluations?”

  1. Neha Sharma, I really enjoyed your thoughts drawing a parallel between students responding negatively to low scores and primary intended users not using findings when the evaluation is too critical. I am a high school teacher currently enrolled in a course on program inquiry and evaluation and have therefore been reading the AEA365 posts.

    To me, the underlying issue here seems to be emotions and motivation and how they affect our behavior (in this case, taking action on feedback), but I would also like to tie in ownership and how it can help people move forward with information rather than ignore it or give up entirely.

    One way to encourage positive emotions and motivation is to give ownership. Certainly this is a cornerstone of Patton’s 17-step utilization-focused framework, wherein step 3 is to identify, organize, and engage primary intended users: the personal factor.
    Patton argues the following:
    “Intended users are more likely to use evaluations if they understand and feel ownership of the evaluation process and findings [and that] they are more likely to understand and feel ownership if they’ve been actively involved. By actively involving primary intended users, the evaluator is preparing the groundwork for use.” (Patton, 2008, Chapter 3, as cited on the BetterEvaluation website)

    It seems that whether we are talking about ourselves, our students, or the programs that we are evaluating, ownership is imperative to improvement. In program evaluation, increased collaboration between intended users and evaluators helps foster a sense of ownership that drives the use of findings, while in the classroom there is much evidence tying ownership to increased positive emotions and better learning (literature review as cited in Stephens, 2015). Strategies for developing ownership range from how the room is set up and what work is displayed, to giving students a choice of assignment type for showing their skills and understanding, to having a say in what is assessed, taking part in building rubrics, and in some cases being part of the evaluation (Ontario Curriculum, Science, and Woolfolk Hoy et al., 2016).

    I think that with increased ownership, when students or primary intended users are met with critique or some failure, they will see it more as a learning opportunity than a reason to throw in the towel. When there is ownership, there is less likelihood of complaining or seeing results as too harsh: the primary intended users or students have been part of the process, so there are no surprises; they are probably more self-aware; process use or formative learning has likely already taken place; and thinking about how change can happen is more likely to already be in place when the feedback is received.

    While I see how Thaler’s method of increasing the denominator on his tests (137 instead of 100) lets students earn higher point totals in the numerator, and he feels these higher numbers help to motivate students, the fact remains that the percentage they get on the assignment is the same. When dealing with emotions and motivation, I think I prefer approaches like ownership over ones that try to hide the numbers. Still, this topic certainly sheds light on the importance of the psychology of learning, and on how we can motivate ourselves and others to take action on feedback, whether it be on a test, an assignment, or a program evaluation.

    Thanks for the thought provoking post.

    Tamara

    The link to Patton’s 17-step utilization-focused framework as outlined on the BetterEvaluation website: http://www.betterevaluation.org/en/plan/approach/utilization_focused_evaluation

    Link to the Ontario Curriculum, Science:
    http://www.edu.gov.on.ca/eng/curriculum/secondary/science.html

    References:
    Stephens, Tammy L. (2015). Encouraging Positive Student Engagement and Motivation: Tips for Teachers. Review360, Pearson. (https://www.pearsoned.com/encouraging-positive-student-engagement-and-motivation-tips-for-teachers/)
    Woolfolk Hoy, A., Winne, Philip H., and Perry, N. (2016). Child and Adolescent Development and Learning (Pearson Custom Education, Chapter 7). Pearson Learning Solutions.

  2. Thank you Neha Sharma for such a thought provoking article.

    My name is Dan Siertsema, a graduate student with Queen’s University in Ontario, Canada. Currently I am working on a course concerning Program Inquiry and Evaluation and was tasked with finding an article of interest to respond to. Your article resonated with me and I wish to “think out loud” and perhaps establish a dialogue with you.

    The concept of evaluation, in this case as a summative mark, has always intrigued me as a teacher. Often I see the detriment caused to students when they receive a ‘poor’ mark after having worked hard and made progress. Evaluation conducted during the process, focused on skill building without a grade value and accompanied by structured feedback, helps students improve their learning and often persist. Of course, this also depends on the relationship you have built with them. However, when students receive feedback that is emotional and negative in nature, or a “bad” mark, they often stop valuing the mark (the teacher is too hard or doesn’t like me) or they believe they are incapable of doing any better. This idea harkens back to Stanford University psychologist Carol Dweck and her ideas about fixed and growth mindsets. A lot of learning is about believing whether you can or cannot accomplish something. In this regard, Richard Thaler’s trick may actually help students establish a better attitude toward school and therefore be more apt to view learning positively and persist in schooling.

    The problem I see with changing marks, as you mention, is that it doesn’t accurately portray the skills and abilities of the student. In some ways this could set them up for failure later, when there is even more at stake: for example, the job interview itself, or a job in which they are qualified on paper but unqualified in reality. In some cases, this could even be dangerous. The other thing I have noticed in my own experience is that many students are aware when they receive marks they have not earned. The result is that it undermines everyone’s marks, and those students who worked very hard feel like it wasn’t worth it, essentially establishing a viewpoint that does not value hard work.

    Another thing to consider: Is the poor evaluation received by the students the result of the student or the teacher? Was the measure reflective of the teaching? Often, as educators, we must self-reflect and ask ourselves whether the student outputs are appropriate to the inputs, the time and quality of instruction. If students aren’t achieving, why? Is it their mindset or ours?

    I accept the notion that “easy works better than complicated, attractive works better than dull, and social works better than solitary” when trying to change behaviors. What I wonder is: do we adjust our instruction and evaluations to accommodate this, or do we try to change the cultural mindset to embrace challenge, see failure as learning, and accept risk?

  3. Hi Neha,

    As I read through your thoughts about the effect poor grades (or seemingly poor grades) can have on a student’s cognition and their view of their relationship with school, I wondered whether the solution Richard Thaler proposes in a way undermines a person’s ability to fail (or do poorly) and learn from it. I understand his approach, I think, which is to minimize the impact of a seemingly low grade so that students stay committed to the course and its material. A “70 disguised as a 90” might well be a very effective way of keeping a student engaged. It becomes dangerous, from my perspective, to start doctoring these numbers, precisely because of the point you made about results affecting funding, professional growth, and so on. I wouldn’t necessarily say that one would influence the other; in fact, I hope not. I hope these decisions are based on a larger quality- and criteria-based assessment, but I worry that data can unfairly influence some of these decisions. Thank you for reading my comment!
    Dan Baboolal
