Arts, Culture, and Museums TIG Week: Awareness of Cognitive Biases in Organizational and Policy Evaluation in the Arts by Rachael Jenison

Hi, I’m Rachael Jenison (MFA/MPA), and I currently serve as an AmeriCorps Member at the New York City Department of Cultural Affairs (DCLA), a mayoral agency that oversees the distribution of city funds to arts and culture organizations. My focus there is evaluating equity-based reforms to the grant-making process. Before serving at DCLA, I worked as a professional actor and was co-founder and artistic director of a small theater company.

When I was running that company, our evaluation was rough and ready. We lacked the capacity for formal data collection and analysis and relied instead on rough counts and reflection to evaluate our programming, audience engagement, and outreach, which is par for the course for an organization with an annual budget under $25,000. Those estimates let us advocate for funding and make an argument for the value of the art we were creating, but we were running on instinct. The lack of capacity for more formal analysis also limited our potential for growth and stability, an issue many small organizations face.

Lessons Learned

One published goal of DCLA’s reforms is to “provide stability for grantees, especially smaller organizations.” I know how impactful changes such as increased minimum awards and multi-year funding commitments can be, and, on a larger societal scale, how imperative it is that we endeavor through policy to “enhance services for [those] that have experienced historical and/or systematic financial and social inequities.”

While there is substantially more capacity and accountability for formal evaluation at the policy level than at that of a small organization, judgment calls are still required when evaluating the efficacy of a policy change or a change in resources or materials. For example, when evaluating a metric like distance from the mean as it relates to a particular change, some judgment is required to translate the numerical distance into a conceptual size, and that conceptual size into a recommendation. The same is true of frequency: determining, for instance, what rate of a qualitative survey response counts as high versus low, and whether or how that response should shape processes going forward. When the aim is to reduce barriers to applying for eligible organizations as much as possible, these calls can feel weighty. As I learn to evaluate in this more formal context, I am consistently reminded of the importance of being thorough and offering recommendations that align as closely with the data as possible.
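One way to keep these judgment calls honest is to write the thresholds down explicitly so they can be discussed and revised. The sketch below is purely illustrative: the functions, cutoffs (1.0 and 2.0 standard deviations; 10% and 30% response shares), and data are my own assumptions, not values used by DCLA.

```python
# Illustrative sketch only: the cutoffs below are hypothetical judgment
# calls made visible as named parameters, not DCLA policy values.
from statistics import mean, stdev

def distance_flag(value, values, moderate=1.0, large=2.0):
    """Translate a numeric distance from the mean into a conceptual size."""
    z = (value - mean(values)) / stdev(values)
    if abs(z) >= large:
        return "large"
    if abs(z) >= moderate:
        return "moderate"
    return "small"

def frequency_flag(count, total, low=0.10, high=0.30):
    """Translate a response count into a high/moderate/low frequency label."""
    share = count / total
    if share >= high:
        return "high"
    if share >= low:
        return "moderate"
    return "low"

# Hypothetical award-size changes (in thousands) and survey tallies:
award_changes = [10, 12, 11, 13, 30]
print(distance_flag(30, award_changes))  # how far is the outlier?
print(frequency_flag(12, 30))            # 12 of 30 respondents raised a theme
```

Writing the cutoffs as named parameters keeps the judgment visible: deciding “what counts as a high frequency” becomes an explicit, reviewable choice rather than an impression formed while reading.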

Rad Resource

One rad (and retro) resource that has helped me think about how my own cognitive biases may shape my interpretations, particularly when evaluating large quantities of data, is “Judgment under Uncertainty: Heuristics and Biases” by Amos Tversky and Daniel Kahneman. This paper, originally published in Science in 1974, proposes three heuristic principles (representativeness, availability, and adjustment and anchoring) by which people “reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations” (p. 1). These heuristics help us make sense of the world around us, but they also have pitfalls. For example, when recalling survey data, a particular qualitative response may seem more frequent if it aligns with an evaluator’s opinion and is therefore more easily recalled, an instance of the availability heuristic. Grounding in the data and following its lead (completing a thorough analysis), while staying cognizant of these cognitive pitfalls, is vital at both the organizational and policy levels, and can help the arts continue to argue for their value and move forward toward equity.
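A simple guard against the availability heuristic is to tally coded responses before characterizing them, so that impressions of frequency are checked against actual counts. The themes and responses in this sketch are invented for illustration.

```python
# Illustrative sketch: checking a recalled impression against the actual
# tally. The coded themes below are invented examples, not real survey data.
from collections import Counter

coded_responses = [
    "application too long", "deadline unclear", "application too long",
    "portal confusing", "deadline unclear", "deadline unclear",
]

counts = Counter(coded_responses)

# An evaluator who recently discussed the portal might recall
# "portal confusing" as common; the tally shows what actually dominates.
print(counts.most_common())
```

Even a count this simple replaces “that theme kept coming up” with a number an entire team can see and debate.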

The American Evaluation Association is hosting Arts, Culture, and Museums (ACM) TIG Week. The contributions all week come from ACM TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association and/or any contributors to this site.
