AEA365 | A Tip-a-Day by and for Evaluators

TAG | survey design

Hi, we are James W. Altschuld, Hsin-Ling (Sonya) Hung, and Yi-Fang Lee from The Ohio State University, Virginia Commonwealth University, and National Taiwan Normal University, and we have presented on, written about, and been involved in needs assessments (NAs) for many years.  In reviewing manuscripts for publication, we have observed measurement and analysis issues in double-scaled surveys used to identify discrepancies between what should be (importance) and what is (current status).  These include problems with wording, scaling choices, organization, and misleading conclusions and interpretations, among others.

If the survey is not sound, what comes from it will be the fruit of the poisonous tree.  By offering ideas for improving NA surveys, we hope that some (not all) of these weaknesses can be addressed and that the quality of needs work will be enhanced.

Hot Tips:

  1. Read sources about the design of NA surveys and especially their strengths and weaknesses (some sources cover both; see Rad Resources).
  2. Search for sample NA surveys in the area of concern and examine/critique them for strengths and weaknesses, as noted in point 1.
  3. See if other techniques have been used alongside the surveys. (Having multiple sources of information is good practice.)
  4. Conduct pre-interviews (focus group or individual) with a few respondents regarding their thoughts and the language they use for the area of concern. (This will make the instrument more meaningful.)
  5. Cluster items into sections and consider having respondents rank the clusters after completing the survey. (Not all clusters will be of equal value.)
  6. Employ options like don’t know (DN), no information upon which to decide (NI), not applicable (NA), etc.
  7. Forcing choices without such options (as in point 6) may produce misleading data; in addition, the options themselves provide useful information.
  8. Include an undecided (neutral) response on the scale. (Similar rationale to point 6.)
  9. Consider alternatives such as magnitude estimation scaling (MES), fuzzy scales, rank ordering approaches, etc. (Let’s be expansive and innovative in what we do.)
  10. Multiple ways exist for analyzing the data, ranging from simple and weighted needs indexes to mean-difference analysis to proportional reduction in error (PRE). (Try several and see if the results differ and the conclusions are affected; a sketch follows this list.)
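
To make point 10 concrete, here is a minimal sketch of two of those analyses applied to double-scaled ratings: a per-item mean-difference (discrepancy) analysis and one illustrative weighted needs index. The column names, toy data, and the exact weighting formula are assumptions for illustration, not a prescribed method.

```python
import pandas as pd

# Hypothetical double-scaled responses: each respondent rates every item on
# importance ("what should be") and current status ("what is"), both on 1-5 scales.
data = pd.DataFrame({
    "item":       ["i1", "i1", "i2", "i2", "i3", "i3"],
    "importance": [5, 4, 3, 4, 5, 5],
    "current":    [2, 3, 3, 3, 4, 5],
})

summary = data.groupby("item").agg(
    mean_importance=("importance", "mean"),
    mean_current=("current", "mean"),
)

# Mean-difference (discrepancy) analysis per item.
summary["mean_difference"] = summary["mean_importance"] - summary["mean_current"]

# One illustrative weighted needs index: weight each item's discrepancy by its
# mean importance so that important items with large gaps rise to the top.
summary["weighted_needs_index"] = summary["mean_importance"] * summary["mean_difference"]

print(summary.sort_values("weighted_needs_index", ascending=False))
```

Running the different indexes side by side, as the tip suggests, is a quick way to see whether your conclusions about the highest-priority needs depend on the analytic choice.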

Lessons Learned:

  1. Seemingly simple double-scaled surveys are not so simple after all.
  2. ‘There are 95 rules of survey design, and after the 95th there are 95 more you don’t know about.’ This applies doubly to double-scaled NA surveys; to illustrate the assertion, note that we haven’t even touched the gnarly topic of how to word items.

Rad Resources:

Altschuld, J. W. (2010). Needs assessment phase II: Collecting data (Chapter 3: That pesky needs assessment survey, pp. 35-57). Thousand Oaks, CA: Sage Publications.

White, J. L., & Altschuld, J. W. (2012). Understanding the “what should be” condition in needs assessment data. Evaluation and Program Planning, 35(1), 124-132.

The American Evaluation Association is celebrating Needs Assessment (NA) TIG Week with our colleagues in the Needs Assessment Topical Interest Group. The contributions all this week to aea365 come from our NA TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


I am Carla Hillerns from the Office of Survey Research at the University of Massachusetts Medical School’s Center for Health Policy and Research. In 2015, my colleague and I shared a post about how to avoid using double-barreled survey questions. Today I’d like to tackle another pesky survey design problem – leading questions. Just as we don’t want lawyers asking leading questions during a direct examination of a witness, it’s also important to avoid leading survey respondents.

By definition, a leading question “guides” the respondent towards a particular answer. Poorly designed questions can create bias since they may generate answers that do not reflect the respondent’s true perspective. Using neutral phrasing helps uncover accurate information. Here are a few examples of leading questions as well as more neutral alternatives.

[Table: examples of leading questions, more neutral alternative wording, and the reason for each change]

Hot Tips for Avoiding Leading Questions:

  1. Before deciding to create a survey, ask yourself what you (or the survey sponsor) are trying to accomplish through the research. Are you hoping that respondents will answer a certain way, which will support a particular argument or decision? Exploring the underlying goals of the survey may help you expose potential biases.
  2. Ask colleagues to review a working draft of the survey to identify leading questions. As noted above, you may be too close to the subject matter and introduce your opinions through the question wording. A colleague’s “fresh set of eyes” can be an effective way to tease out poorly phrased questions.
  3. Test the survey. Using cognitive interviews is another way to detect leading questions. This type of interview allows researchers to view the question from the perspective of the respondent (see this AEA365 post for more information).

Rad Resource: My go-to resource for tips on writing good questions continues to be Internet, phone, mail, and mixed-mode surveys: The tailored design method by Dillman, Smyth & Christian.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

This is Heather Esper, senior program manager, and Yaquta Fatehi, senior research associate, from the Performance Measurement Initiative at the William Davidson Institute at the University of Michigan. Our team specializes in performance measurement to improve organizations’ effectiveness, scalability, and sustainability and to create more value for their stakeholders in emerging economies.

Our contribution to social impact measurement (SIM) focuses on assessing poverty outcomes in a multi-dimensional manner. But what do we mean by multi-dimensional? For us, this refers to three things. First, it means speaking to all local stakeholders when assessing change created by a program or market-based approach in the community. This includes not only stakeholders who interact directly with the organization, such as customers or distributors from low-income households, but also those who do not engage with the venture, such as farmers who do not sell their product to the venture, or non-customers. Second, it requires moving beyond measuring only economic outcome indicators to studying changes in the capability and relationship well-being of local stakeholders. Capability refers to constructs such as an individual’s health, agency, self-efficacy, and self-esteem. Relationship well-being refers to changes in the individual’s role in the family and community, as well as in the quality of the local physical environment. Third, assessing multi-dimensional outcomes means capturing positive as well as negative changes for stakeholders and for the local physical and cultural environment.

We believe assessing multidimensional outcomes better informs internal decision-making. For example, we conducted an impact assessment with a last-mile distribution venture and focused on understanding the relationship between business and social outcomes. We found a relationship between self-efficacy and sales, and self-efficacy and turnover, meaning if the venture followed our recommendation to improve sellers’ self-efficacy through trainings, they would also likely see an increase in sales and retention.

Rad Resources:

  1. Webinar with the Grameen Foundation on the value of capturing multi-dimensional poverty outcomes
  2. Webinar with SolarAid on qualitative methods to capture multi-dimensional poverty outcomes
  3. Webinar with Danone Ecosystem Fund on quantitative methods to capture multi-dimensional poverty outcomes

Hot Tips:  Key survey development best practices:

  1. Start with existing questions developed and tested by other researchers when possible and modify as necessary with a pretest.
  2. Pretest using cognitive interviewing methodology to ensure a context-specific survey and informed consent. We tend to use a sample size of at least 12.
  3. For all relevant questions, test reliability and variability using the data gathered from the pilot. We tend to use a sample size of at least 25 to conduct analyses such as Cronbach’s alpha for multi-item scale questions (see the sketch after this list).
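
As a concrete example of that reliability check, here is a minimal sketch of computing Cronbach’s alpha on pilot data for one multi-item scale. The column names and toy data are hypothetical; with real pilot data you would typically hope for an alpha of roughly .70 or higher.

```python
import numpy as np
import pandas as pd

def cronbachs_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of one scale."""
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot responses: 25 respondents x 4 items, each rated 1-5.
pilot = pd.DataFrame(
    np.random.default_rng(0).integers(1, 6, size=(25, 4)),
    columns=["q1", "q2", "q3", "q4"],
)

print(f"Cronbach's alpha: {cronbachs_alpha(pilot):.2f}")
```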

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings AEA colleagues. We are Carla Hillerns and Pei-Pei Lei – survey enthusiasts in the Office of Survey Research at the University of Massachusetts Medical School. In 2014, we shared a post about effective email subject lines for internet survey invitations. Today we’d like to focus on the body of the email. Here are strategies for writing email invitations that motivate recipients to participate in your survey.

Hot Tips:

  • Personalize the salutation. Whenever possible, begin the invitation with the recipient’s name, such as “Dear Carla Hillerns” or “Dear Ms. Lei.” Personalization helps people know that they’re the intended recipient of the invitation.
  • Do not bury the lead. Use the first line or two of the email to invite the recipient to take the survey. Some people might open your email on mobile devices, which have significantly smaller screen sizes than most computers.
  • Include the essentials. A survey invitation should accomplish the following:
    • Explain why the individual was chosen for the survey
    • Request participation in the survey
    • Explain why participation is important
    • Provide clear instructions for accessing the survey
    • Address key concerns, such as confidentiality, and provide a way for recipients to ask questions about the survey, such as a telephone number and email address
    • Express appreciation
    • Include sender information that conveys the survey’s legitimacy and significance
  • Less is more. The most frequent problem we’ve seen is an overly wordy invitation. Follow the modified KISS principle – Keep It Short and Simple. Common issues that complicate invitations are:
    • Overlong sentences
    • Redundant points
    • Extra background details
    • Cryptic wording, such as acronyms and technical jargon
    • Intricate instructions for accessing and/or completing the survey

Cool Trick:

  • Pre-notify, if appropriate. Examples of pre-notifications include an advance letter from a key sponsor or an announcement at a meeting. Pre-notification can be a great way to relay compelling information about the survey so that the email invitation can focus on its purpose.

Rad Resources:

  • Emily Lauer and Courtney Dutra’s AEA365 post on using Plain Language offers useful tips that can be applied to all aspects of survey design and implementation, including the initial invitation email, any reminder emails, and the survey itself.
  • Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th Edition by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian provides lots of helpful guidance for crafting invitations and implementing internet surveys.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I am Holly Kipp, Researcher, from The Oregon Community Foundation (OCF). Today’s post shares some of what we’re learning through our efforts to measure social-emotional learning (SEL) in youth in the context of our K-12 Student Success Initiative.

The Initiative, funded in partnership with The Ford Family Foundation, aims to help close the achievement gap among students in Oregon by supporting expansion and improvement of out-of-school time programs for middle school students.

Through our evaluation of the Initiative, we are collecting information about program design and improvement, students and their participation, and student and parent perspectives. One of our key data sources is a survey of students about their social-emotional learning (SEL).

Rad Resources: There are a number of places where you can learn more about SEL and its measurement. Some key resources include:

  • The Collaborative for Academic Social and Emotional Learning, or CASEL
  • The University of Chicago Consortium on School Research, in particular their Students & Learning page

In selecting a survey tool, we wanted to ensure the information collected would be useful both for our evaluation and for our grantees. Engaging grantee staff in the tool selection process gave them a direct stake in that process and, we hoped, buy-in to using the tool we chose – not only for our evaluation efforts but for their ongoing program improvement.

Hot Tip: Engage grantee staff directly in vetting and adapting a tool.

We first mined grantee logic models for their outcomes of interest, reviewed survey tools already in use by grantees, and talked with grantees about what they wanted and needed to learn. We then talked with grantees about the frameworks and tools we were exploring in order to get their feedback.

We ultimately selected and adapted The Youth Skills and Beliefs Survey developed by the Youth Development Executives of King County (YDEKC) with support from American Institutes for Research.

Rad Resource: YDEKC has made available lots of information about their survey, the constructs it measures, and how they developed the tool.

Rad Resource: There are several other well-established tools worth exploring, such as the DESSA (or DESSA-mini) and DAP and related surveys, especially if cost is not a critical factor.

Hot Tip: Student surveys aren’t the only way to measure SEL! Consider more qualitative and participatory approaches to understanding student social-emotional learning.

Student surveys are only one approach to measuring SEL. We are also working with our grantees to engage students in photo voice projects that explore concepts of identity and belonging – elements that are more challenging to measure well with a survey.

Rad Resource: AEA’s Youth Focused TIG is a great resource for youth focused and participatory methods.

The American Evaluation Association is celebrating Oregon Community Foundation (OCF) week. The contributions all this week to aea365 come from OCF team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


We are Caitlin Ruffenach, Researcher, and Kim Leonard, Senior Evaluation Officer, from The Oregon Community Foundation (OCF). Among other things, we are working on an evaluation of the Studio to School Initiative at OCF, which focuses on the development of sustainable arts education programs through partnerships between arts organizations and schools.

This past summer, in collaboration with the Oregon Arts Commission, we conducted a survey of arts organizations in Oregon in an effort to learn about the arts education programming they provide, often in concert with what is available more directly through the school system.

The purpose of this survey was to help the Foundation understand how the grantees of its Studio to School Initiative fit into the broader arts education landscape in Oregon. We hope the survey results will also serve as a resource for grantees, funders, and other stakeholders to understand and identify programs delivering arts education throughout the state.

Lesson Learned: To ensure we would have the most useful information possible, our survey design process included several noteworthy steps:

  1. We started with existing data: by gathering information about organizations that had received arts education funding in Oregon in the past, we were able to target our efforts to recruit respondents;
  2. We consulted with others who have done similar surveys to learn from their successes and challenges;
  3. We paid close attention to survey question wording to ensure that we focused as tightly as possible on what was measurable by survey; and
  4. We vetted our early findings with arts education stakeholders.

Hot Tip: A collaborative, inclusive survey design process can result in better survey tools. We used a small, informal advisory group throughout the process that included members who had conducted similar surveys and representatives of our target respondent group. They helped with question wording, as well as with identifying a small survey pilot.

Hot Tip: Vetting preliminary findings with stakeholders is fun and helps support evaluation use. We took advantage of an existing gathering of arts stakeholders in Oregon to share and workshop our initial findings. We used a data placemat, complete with re-useable stickers, to slowly reveal the findings. We then engaged the attendees in discussions about how the findings did or didn’t resonate with their experiences. What we learned during this gathering is reflected in our final report.

Rad Resources: We are not the first to try a more inclusive process, both in developing our survey tool and in vetting/interpreting the results! Check out the previous aea365 post about participatory data analysis. And check out the Innovation Network’s slide deck on Data Placemats for more information about that particular tool.

The American Evaluation Association is celebrating Oregon Community Foundation (OCF) week. The contributions all this week to aea365 come from OCF team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Ama Nyame-Mensah, and I am a doctoral student in the Social Welfare program at the University of Pennsylvania.

Likert scales are commonly used in program evaluation. However, despite their widespread popularity, Likert scales are often misused and poorly constructed, which can result in misleading evaluation outcomes. Consider the following tips when using or creating Likert scales:

Hot Tip #1: Use the term correctly

A Likert scale consists of a series of statements that measure individuals’ attitudes, beliefs, or perceptions about a topic. For each statement (or Likert item), respondents are asked to choose the one option from a list of ordered response choices that best aligns with their view. Numeric values are assigned to each answer choice for the purpose of analysis (e.g., 1 = Strongly Disagree, 4 = Strongly Agree). Each respondent’s responses to the set of statements are then combined into a single composite score/variable, as in the sketch below.
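
Here is a minimal sketch of that scoring step: map the ordered labels to numbers and average (or sum) each respondent’s item scores into one composite. The item names, labels, and data are hypothetical, and any reverse-worded items would need to be recoded before combining.

```python
import pandas as pd

# Map the ordered response labels of a 4-point agreement scale to numeric codes.
scale = {"Strongly Disagree": 1, "Disagree": 2, "Agree": 3, "Strongly Agree": 4}

# Hypothetical raw responses from three respondents to three Likert items.
responses = pd.DataFrame({
    "item1": ["Agree", "Strongly Agree", "Disagree"],
    "item2": ["Strongly Agree", "Agree", "Strongly Disagree"],
    "item3": ["Agree", "Agree", "Disagree"],
})

numeric = responses.apply(lambda col: col.map(scale))  # 1-4 codes per item
composite = numeric.mean(axis=1)                       # one composite score per respondent
print(composite)
```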


Hot Tip #2: Label your scale appropriately

To avoid ambiguity, assign a “label” to each response option. Make sure to use ordered labels that are descriptive and meaningful to respondents.


Hot Tip #3: One statement per item

Avoid including items that consist of multiple statements but allow for only one answer. Such items can confuse respondents and introduce unnecessary error into your data. Look for the words “and” and “or” as a signal that an item may be double-barreled, as in the sketch below.
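
That screening heuristic is easy to automate when you have a long draft item bank. A tiny sketch, using hypothetical items; it only flags candidates for human review rather than deciding anything on its own.

```python
import re

# Hypothetical draft items to screen.
draft_items = [
    "The program staff were friendly and knowledgeable.",   # likely double-barreled
    "I would recommend this program to a colleague.",
    "The sessions started on time or ran as scheduled.",    # likely double-barreled
]

# Flag items containing the standalone words "and" or "or" for manual review.
pattern = re.compile(r"\b(and|or)\b", flags=re.IGNORECASE)
for item in draft_items:
    if pattern.search(item):
        print(f"Review for double-barreling: {item}")
```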


Hot Tip #4: Avoid multiple negatives

Rephrase negative statements into positive ones. Such statements are confusing and difficult to interpret.


Hot Tip #5: Keep it balanced

Regardless of whether you use an odd or even number of response choices, include an equal number of positive and negative options for respondents to choose from because an unbalanced scale can produce response bias.


Hot Tip #6: Provide instructions

Tell respondents how you want them to answer the question. This will ensure that respondents understand and respond to the question as intended.


Hot Tip #7: Pre-test a new scale

If you create a Likert scale, pre-test it with a small group of coworkers or members of your target population. This can help you determine whether your items are clear, and your scale is reliable and valid.

The Likert scale and items used in this blog post are adapted from the Rosenberg Self-Esteem Scale.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hi, I’m Ama Nyame-Mensah. I am a doctoral student at the University of Pennsylvania’s School of Social Policy & Practice. In this post, I will share with you some lessons learned about incorporating demographic variables into surveys or questionnaires.

For many, the most important part of a survey or questionnaire is the demographics section. Not only can demographic data help you describe your target audience, but they can also reveal patterns in the data across certain groups of individuals (e.g., by gender or income level). So asking the right demographic questions is crucial.

Lesson Learned #1: Plan ahead

In the survey/questionnaire design phase, consider how you will analyze your data by identifying relevant groups of respondents. This will ensure that you collect the demographic information you need. (Remember: you cannot analyze data you do not have!)

Lesson Learned #2: See what others have done

If you are unsure of what items to include in your demographics section, try searching through AEA’s Publications or Google Scholar for research/evaluations done in a similar area. Using those sources, you can locate specific tools or survey instruments with demographic questions that you would like to incorporate into your own work.

Lesson Learned #3: Let respondents opt out

Allow respondents the option of opting out of the demographics section in its entirety, or, at the very least, make sure to add a “prefer not to answer” option to all demographic questions. In general, it is good practice to include a “prefer not to answer” choice when asking sensitive questions because it may make the difference between a respondent skipping a single question and discontinuing the survey altogether.

Lesson Learned #4: Make it concise, but complete

I learned one of the best lessons in survey/questionnaire design at my old job. We were in the process of revamping our annual surveys, and a steering committee member suggested that we put all of our demographic questions on one page. Placing all of your demographic questions on one page will not only make your survey “feel” shorter and flow better, but it will also push you to think about which demographic questions are most relevant to your work.

Collecting the right demographic data in the right way can help you uncover meaningful and actionable insights.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


I’m Emily Greytak, the Director of Research at GLSEN, a national organization addressing lesbian, gay, bisexual, transgender, and queer (LGBTQ) issues in K-12 education. At GLSEN, we are particularly interested in the experiences of LGBTQ people, but also know that it’s important to identify LGBTQ individuals in even more general evaluation research – whether just as basic descriptive information about the sample, or to examine potential differential experiences.

Lessons Learned: When considering the best ways to identify LGBTQ people in your evaluations, here are four key questions to ask before selecting your measures:

  • What do you want to assess? The LGBTQ population includes identities based on both sexual orientation (LGBQ) and gender identity (T). Sometimes you might want to assess both; other times, one might be more salient. For example, if you want to know about gender differences in use of a resource, sexual orientation may not be as necessary to assess, whereas gender identity would be. Within each of these broader constructs, there are different elements. For example, do you want to know about sexual identity, same-gender sexual behavior, and/or same-gender sexual attraction? If you are examining an intervention designed to affect sexual activity, then behavior might be the most important.
  • What is your sample? Are you targeting an LGBTQ-specific population or a more general population? The specificity of your measures and the variety of your response options might differ. What about age? Language comprehension and vernacular could vary greatly. For example, with youth populations, the identity label “queer” might be fairly commonplace, whereas older generations might still predominantly consider it a slur, and its inclusion could put off respondents.
  • What are your measurement options? Can you include select-all options for sexual identity or gender? Can you include definitions for those who need them? Can you use multiple items to identify a construct (e.g., assessing transgender status by asking current gender along with assigned sex at birth; see the sketch after this list)?
  • What can you do with it? Consider your capacity for analysis – e.g., do you have the expertise and resources to assess write-in responses? Once you are able to identify LGBTQ people in your sample, what do you plan to do with that information? For example, if you aren’t able to examine differences between transgender males and females, perhaps a simpler transgender-status item is sufficient (as opposed to a measure that allows for gender-specific responses).
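
As a concrete illustration of the two-step approach mentioned above (assigned sex at birth plus current gender identity), here is a minimal sketch of deriving a transgender-status category during analysis. The response labels and coding rules are simplified assumptions for illustration; real instruments and coding schemes should follow the best-practice guides in the Rad Resources below.

```python
def transgender_status(assigned_sex: str, current_gender: str) -> str:
    """Derive a simple transgender-status category from two-step survey items.

    Simplified for illustration: real coding schemes handle write-in answers,
    additional identity options, and missing data far more carefully.
    """
    assigned_sex = assigned_sex.strip().lower()
    current_gender = current_gender.strip().lower()
    if (not assigned_sex or not current_gender
            or "prefer not" in assigned_sex or "prefer not" in current_gender):
        return "unknown"
    # Respondents whose current gender matches their assigned sex at birth.
    cisgender_pairs = {("female", "woman"), ("male", "man")}
    if (assigned_sex, current_gender) in cisgender_pairs:
        return "cisgender"
    return "transgender or gender-expansive"

print(transgender_status("male", "Woman"))    # -> transgender or gender-expansive
print(transgender_status("female", "Woman"))  # -> cisgender
```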

Once you answer these questions, then you can move on to selecting your specific measures. Use the Rad Resources for guidance and best practices.

Rad Resources:

Best Practices for Asking About Sexual Orientation

Best Practices for Asking Questions to Identify Transgender and Other Gender Minority Respondents

Making Your Evaluation Inclusive: A Practical Guide for Evaluation Research with LGBTQ People

The American Evaluation Association is celebrating LGBT TIG Week with our colleagues in the Lesbian, Gay, Bisexual & Transgender Issues Topical Interest Group. The contributions all this week to aea365 come from our LGBT TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Welcome to the Lesbian, Gay, Bisexual and Transgender Evaluation Topical Interest Group (LGBT TIG) week on aea365! My name is Leia K. Cain, and I’m an instructor at the University of South Florida in the Educational Measurement and Research program. This week, I’m acting as the coordinator for the LGBT TIG’s blog posts.

One area of measurement in evaluation work that I feel really strongly about is the use of binaries. When you think about sexualities, do you only think of gay or straight? Homosexual or heterosexual? What if I told you that there were so many more categories in the in-between areas?

Lesson Learned: After reading Judith Butler’s work, I started working through the binaries under which my own thinking is structured. I still catch myself falling into binary thought categories sometimes, but I constantly work to “queer” my understanding of whatever topic is at hand – I break apart my understanding and try to examine it.

In my particular line of work, I have examined the effect that outness has on the experiences and perceptions of LGBTQ individuals. However, I didn’t just ask participants whether they were out or not – instead, I asked them to rate their outness on a scale from 1 to 6, where 1 meant “not at all out” and 6 meant “completely out.” This is similar to the Kinsey Scale, created by Dr. Alfred Kinsey, which measures sexuality on a seven-point scale with categories ranging from 0 to 6.

I encourage you to think about how binaries could be stifling your evaluation and research work as well. After all, the world isn’t black or white, 0 or 1, right or wrong. If you aren’t measuring the identities that fill the spaces in between, are you really reaching your entire audience?

Rad Resource: For more information on the Kinsey Scale, check out the Kinsey Institute’s webpage.

The American Evaluation Association is celebrating LGBT TIG Week with our colleagues in the Lesbian, Gay, Bisexual & Transgender Issues Topical Interest Group. The contributions all this week to aea365 come from our LGBT TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
