AEA365 | A Tip-a-Day by and for Evaluators


Greetings, I am June Gothberg, Ph.D., from Western Michigan University, Chair of the Disabilities and Underrepresented Populations TIG and co-author of the Universal Design for Evaluation Checklist (4th ed.). Historically, ours has been a ‘working’ TIG, collaborating with AEA and the field to build capacity for accessible and inclusive evaluation. Several terms describe our philosophy: inclusive, accessible, perceptible, voice, empowered, equitable, and representative, to name a few. As we end our week, I’d like to share major themes that have emerged over my three terms in TIG leadership.

Lessons Learned

  • Representation in evaluation should mirror representation in the program. This is often overlooked in evaluation reports. In one community housing evaluation, for example, the data overrepresented some groups and underrepresented others.

 HUD Participant Data Comparison

  • Avoid using TDMs.
    • T = tokenism: giving participants a voice in evaluation efforts but little to no choice about the subject or style of communication, and no say in the organization.
    • D = decoration: asking participants to take part in evaluation efforts with little to no explanation of the reason for their involvement or how it will be used.
    • M = manipulation: pressuring or coercing participants into evaluation efforts. One example, presented in 2010, involved food stamp recipients who were required to answer surveys or become ineligible to continue receiving assistance. The surveys included identifying information.
  • Don’t assume you know the backgrounds, cultures, abilities, and experiences of your stakeholders and participants. If you plan for all, all will benefit.
    • Embed the principles of Universal Design whenever and wherever possible.
    • Utilize trauma-informed practice.
  • Increase authentic participation, voice, recommendations, and decision-making by engaging all types and levels of stakeholders in evaluation planning efforts. The IDEA Partnership depth of engagement framework for program planning and evaluation has been adopted in state government planning efforts across the United States.

 IDEA Partnership Leading by Convening Framework

  • Disaggregating data helps uncover and eliminate inequities. This example is data from Detroit Public Schools (DPS). DPS is often in the news, cited as having dismal outcomes. If we compare state data with DPS, does it really look dismal?

 2015-16 Graduation and Dropout Rates

 

Disaggregating by one level would uncover some inequities, but disaggregating by two levels shows areas that can and should be addressed.

 2015-16 Graduation and Dropout Rates, Detroit, by Gender

 

 

We hope you’ve enjoyed this week of aea365 hosted by the DUP TIG.  We’d love to have you join us at AEA 2017 and throughout the year.

The American Evaluation Association is hosting the Disabilities and Underrepresented Populations TIG (DUP) Week. The contributions all week are focused on engaging DUP in your evaluation efforts. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

· · · ·

Greetings! I’m Joe Heimlich, a Professor at The Ohio State University with Extension, the School of Environment & Natural Resources, and the Environmental Science Graduate Program. I’m also a Senior Research Associate with the Institute for Learning Innovation. This year, along with John Daws, I’m Program co-Chair for the AEA LGBT Issues TIG.

Hot Tip: One of the most ethical decisions we make as evaluators is determining what we want to know about a person. Age, income, race, ethnicity, educational level…all carry with them ethical questions about use, need, confidentiality, and more. Often there are good reasons for deciding to ask LGBTQQI questions. (That unpronounceable acronym stands for lesbian, gay, bisexual, transgender, queer, questioning, and intersex. It is often shortened to LGBT.)

There are two major conditions shaping the decision to include – or to omit intentionally – questions on sexual or gender identity, and neither relates to LGBT politics:

  1. When such data would further our understanding of the effect or impact of a program, treatment, or event. The rule of thumb I use: if I am making assumptions based on sex (male or female, as biological indicators) that involve gender role issues, then I need to include gender identity as a factor. This is especially true given research findings that, in many situations, gay men are more like straight women in some decision and interaction processes. My favorite example is the question of dark versus milk chocolate preference. And yes, in groups, straight men do eat more milk than dark chocolate, but straight women and gay men eat more dark than milk. Nature? Nurture? That we don’t know suggests to me that we should ask more and assume less.
  2. When asking for such data would benefit the individual and/or their engagement in the evaluation process. We all like to be included, and the chance to “see” oneself is important. In my recent interviews of same-sex couples about museum membership, one example kept coming up: the museums’ membership forms did not allow gay and lesbian households to self-identify. The unfriendliest forms had two lines for entering names, labeled Male and Female. Other forms allowed two names to be entered but did not allow the couple to indicate the relationship between them. If our evaluations are designed to let people’s voices be heard, there may be times when we need to let them know we want to hear their full voice, which means including all of who they are.

Rad Resource: A leading edge is the American Psychological Association’s Division 44, Society for the Psychological Study of Gay, Lesbian, Bisexual, and Transgender Issues. This is the group that first identified 11 genders.

The American Evaluation Association is celebrating LGBT Evaluation Week with our colleagues in the LGBT AEA Topical Interest Group. The contributions all this week to aea365 come from our LGBT members and you may wish to consider subscribing to our weekly headlines and resources list where we’ll be highlighting LGBT resources.

· · · ·

Hello everyone. My name is Ioana Munteanu, and I am a Social Science Analyst with the Smithsonian Institution’s Office of Policy and Analysis. The Smithsonian consists of 19 museums and the National Zoo; 9 research centers; and many other centers, programs, and projects. Central to the Office of Policy and Analysis mission is assisting upper management at all levels in making sound decisions about improving the Institution’s exhibitions and programs for physical and virtual visitors and for stakeholders. Dr. Carole Neves directs our Office, which is composed of 12 skilled staff with diverse backgrounds, assisted by fellows and interns from the United States and other countries. Upon request, we conduct formative, process, and summative evaluations of both formal and informal programs and exhibitions offered on-site, off-site, and online; the studies may be Institution-wide or focused on a particular Smithsonian unit, or on a department or program within a unit. The wonderful news is that over 100 of our studies are available online for free. The link to our website is discussed below.

Rad Resource: Studies of visitors to the Smithsonian provide a glimpse into who comes for general museum visits and publicly available offerings, and why; how satisfied they are with their visit and what experiences they had; and what factors contributed to their satisfaction and experiences. These studies include formative assessments conducted during preparatory phases as well as studies of the output and impact of offerings. Staff employ a wide range of methodologies including, but not limited to: quantitative surveys, in person and online; qualitative interviewing; focus groups; observations; visitor tracking; and other methods such as card sorting and concept mapping.


· · · ·
