AEA365 | A Tip-a-Day by and for Evaluators

TAG | survey design

This is Heather Esper, senior program manager, and Yaquta Fatehi, senior research associate, from the Performance Measurement Initiative at the William Davidson Institute at the University of Michigan. Our team specializes in performance measurement to improve organizations’ effectiveness, scalability, and sustainability and to create more value for their stakeholders in emerging economies.

Our contribution to social impact measurement (SIM) focuses on assessing poverty outcomes in a multi-dimensional manner. But what do we mean by multi-dimensional? For us, this refers to three things. First, it means speaking to all local stakeholders when assessing change created by a program or market-based approach in the community. This includes not only stakeholders who interact directly with the organization, such as customers or distributors from low-income households, but also those who do not engage with the venture, like farmers who do not sell their product to the venture, or non-customers. Second, it requires moving beyond measuring only economic outcome indicators to studying changes in the capability and relationship well-being of local stakeholders. Capability refers to constructs such as an individual's health, agency, self-efficacy, and self-esteem. Relationship well-being refers to changes in the individual's role in the family and community and in the quality of the local physical environment. Third, multi-dimensional measurement means assessing positive as well as negative changes for stakeholders and for the local physical and cultural environment.

We believe assessing multi-dimensional outcomes better informs internal decision-making. For example, we conducted an impact assessment with a last-mile distribution venture, focusing on the relationship between business and social outcomes. We found relationships between self-efficacy and sales and between self-efficacy and turnover, meaning that if the venture followed our recommendation to improve sellers' self-efficacy through training, it would also likely see an increase in sales and retention.

Rad Resources:

  1. Webinar with the Grameen Foundation on the value of capturing multi-dimensional poverty outcomes
  2. Webinar with SolarAid on qualitative methods to capture multi-dimensional poverty outcomes
  3. Webinar with Danone Ecosystem Fund on quantitative methods to capture multi-dimensional poverty outcomes

Hot Tips: Key survey development best practices:

  1. Start with existing questions developed and tested by other researchers when possible, and modify as necessary based on a pretest.
  2. Pretest using cognitive interviewing methodology to ensure the survey and informed consent are context-specific. We tend to use a sample size of at least 12.
  3. For all relevant questions, test reliability and variability using the data gathered from the pilot. We tend to use a sample size of at least 25 for analyses such as Cronbach's alpha on multi-item scale questions (see the sketch after this list).
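To make step 3 concrete, here is a minimal sketch of a reliability check on pilot data. It is not from the original post: the file name, column names, and the commonly cited 0.7 threshold are illustrative assumptions.

```python
import pandas as pd

def cronbachs_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents, columns = items)."""
    items = items.dropna()                      # complete cases only
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data (n >= 25, as suggested above); file and column names are made up.
pilot = pd.read_csv("pilot_data.csv")
alpha = cronbachs_alpha(pilot[["se_item1", "se_item2", "se_item3", "se_item4"]])
print(f"Cronbach's alpha: {alpha:.2f}")         # values around 0.7+ are often treated as acceptable
```

If alpha comes out low, item-level statistics such as item-total correlations can point to which question to revise or drop before fielding the full survey.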

The American Evaluation Association is celebrating Social Impact Measurement Week with our colleagues in the Social Impact Measurement Topical Interest Group. The contributions all this week to aea365 come from our SIM TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Greetings AEA colleagues. We are Carla Hillerns and Pei-Pei Lei – survey enthusiasts in the Office of Survey Research at the University of Massachusetts Medical School. In 2014, we shared a post about effective email subject lines for internet survey invitations. Today we’d like to focus on the body of the email. Here are strategies for writing email invitations that motivate recipients to participate in your survey.

Hot Tips:

  • Personalize the salutation. Whenever possible, begin the invitation with the recipient’s name, such as “Dear Carla Hillerns” or “Dear Ms. Lei.” Personalization helps people know that they’re the intended recipient of the invitation.
  • Do not bury the lead. Use the first line or two of the email to invite the recipient to take the survey. Some people will open your email on a mobile device, whose smaller screen shows far less of the message, so the request needs to appear right away.
  • Include the essentials. A survey invitation should accomplish the following:
    • Explain why the individual was chosen for the survey
    • Request participation in the survey
    • Explain why participation is important
    • Provide clear instructions for accessing the survey
    • Address key concerns, such as confidentiality, and provide a way for recipients to ask questions about the survey, such as a telephone number and email address
    • Express appreciation
    • Include sender information that conveys the survey’s legitimacy and significance
  • Less is more. The most frequent problem we’ve seen is an overly wordy invitation. Follow the modified KISS principle – Keep It Short and Simple. Common issues that complicate invitations are:
    • Overlong sentences
    • Redundant points
    • Extra background details
    • Cryptic wording, such as acronyms and technical jargon
    • Intricate instructions for accessing and/or completing the survey

Cool Trick:

  • Pre-notify, if appropriate. Examples of pre-notifications include an advance letter from a key sponsor or an announcement at a meeting. Pre-notification can be a great way to relay compelling information about the survey so that the email invitation can focus on its purpose.


Rad Resources:

  • Emily Lauer and Courtney Dutra's AEA365 post on using Plain Language offers useful tips that can be applied to all aspects of survey design and implementation, including the initial invitation email, any reminder emails, and the survey itself.
  • Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method, 4th Edition by Don A. Dillman, Jolene D. Smyth, and Leah Melani Christian provides lots of helpful guidance for crafting invitations and implementing internet surveys.


 


I am Holly Kipp, Researcher, from The Oregon Community Foundation (OCF). Today’s post shares some of what we’re learning through our efforts to measure social-emotional learning (SEL) in youth in the context of our K-12 Student Success Initiative.

The Initiative, funded in partnership with The Ford Family Foundation, aims to help close the achievement gap among students in Oregon by supporting expansion and improvement of out-of-school time programs for middle school students.

Through our evaluation of the Initiative, we are collecting information about program design and improvement, students and their participation, and student and parent perspectives. One of our key data sources is a survey of students about their SEL.

Rad Resources: There are a number of places where you can learn more about SEL and its measurement. Some key resources include:

  • The Collaborative for Academic, Social, and Emotional Learning, or CASEL
  • The University of Chicago Consortium on School Research, in particular their Students & Learning page

In selecting a survey tool, we wanted to ensure the information collected would be useful both for our evaluation and for our grantees. By engaging grantee staff in the tool selection process, we gave them a direct stake in it and, we hoped, encouraged buy-in to using the tool we chose – not only for our evaluation efforts but for their ongoing program improvement processes.

Hot Tip: Engage grantee staff directly in vetting and adapting a tool.

We first mined grantee logic models for their outcomes of interest, reviewed survey tools already in use by grantees, and talked with grantees about what they wanted and needed to learn. We then talked with grantees about the frameworks and tools we were exploring in order to get their feedback.

We ultimately selected and adapted The Youth Skills and Beliefs Survey developed by the Youth Development Executives of King County (YDEKC) with support from American Institutes for Research.

Rad Resource: YDEKC has made available lots of information about their survey, the constructs it measures, and how they developed the tool.

Rad Resource: There are several other well-established tools worth exploring, such as the DESSA (or DESSA-mini) and DAP and related surveys, especially if cost is not a critical factor.

Hot Tip: Student surveys aren’t the only way to measure SEL! Consider more qualitative and participatory approaches to understanding student social-emotional learning.

Student surveys are only one approach to measuring SEL. We are also working with our grantees to engage students in photo voice projects that explore concepts of identity and belonging – elements that are more challenging to measure well with a survey.

Rad Resource: AEA’s Youth Focused TIG is a great resource for youth focused and participatory methods.

The American Evaluation Association is celebrating Oregon Community Foundation (OCF) week. The contributions all this week to aea365 come from OCF team members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


We are Caitlin Ruffenach, Researcher, and Kim Leonard, Senior Evaluation Officer, from The Oregon Community Foundation (OCF). Among other things, we are working on an evaluation of the Studio to School Initiative at OCF, which focuses on the development of sustainable arts education programs through partnerships between arts organizations and schools.

This past summer, in collaboration with the Oregon Arts Commission, we conducted a survey of arts organizations in Oregon in an effort to learn about the arts education programming they provide, often in concert with what is available more directly through the school system.

The purpose of this survey was to help the Foundation understand how the grantees of its Studio to School Initiative fit into the broader arts education landscape in Oregon. We hope the survey results will also serve as a resource for grantees, funders, and other stakeholders to understand and identify programs delivering arts education throughout the state.

Lesson Learned: To ensure we would have the most useful information possible, our survey design process included several noteworthy steps:

  1. We started with existing data: by gathering information about organizations that had previously received arts education funding in Oregon, we were able to target our recruitment of respondents;
  2. We consulted with others who have done similar surveys to learn from their successes and challenges;
  3. We paid close attention to question wording to ensure we focused as tightly as possible on what a survey can measure; and
  4. We vetted our early findings with arts education stakeholders.

Hot Tip: A collaborative, inclusive survey design process can result in better survey tools. We used a small, informal advisory group throughout the process that included members who had conducted similar surveys and representatives of our target respondent group. They helped with question wording, as well as with identifying participants for a small survey pilot.

Hot Tip: Vetting preliminary findings with stakeholders is fun and helps support evaluation use. We took advantage of an existing gathering of arts stakeholders in Oregon to share and workshop our initial findings. We used a data placemat, complete with reusable stickers, to reveal the findings gradually. We then engaged the attendees in discussions about how the findings did or didn't resonate with their experiences. What we learned during this gathering is reflected in our final report.

Rad Resources: We are not the first to try a more inclusive process, both in developing our survey tool and in vetting and interpreting the results! Check out the previous aea365 post about participatory data analysis. And check out the Innovation Network's slide deck on Data Placemats for more information about that particular tool.



My name is Ama Nyame-Mensah, and I am a doctoral student in the Social Welfare program at the University of Pennsylvania.

Likert scales are commonly used in program evaluation. However, despite their widespread popularity, Likert scales are often misused and poorly constructed, which can result in misleading evaluation outcomes. Consider the following tips when using or creating Likert scales:

Hot Tip #1: Use the term correctly

A Likert scale consists of a series of statements that measure individuals' attitudes, beliefs, or perceptions about a topic. For each statement (or Likert item), respondents are asked to choose the option from a list of ordered response choices that best aligns with their view. Numeric values are assigned to each answer choice for the purpose of analysis (e.g., 1 = Strongly Disagree, 4 = Strongly Agree). Each respondent's answers to the set of statements are then combined into a single composite score or variable.
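As an illustration of that scoring step (a sketch, not part of the original post), the snippet below maps the labelled choices to the numeric values above and averages one respondent's answers into a composite score. The item names and the reverse-scored item are hypothetical.

```python
# Map ordered response labels to numeric values (1 = Strongly Disagree ... 4 = Strongly Agree).
SCORES = {"Strongly Disagree": 1, "Disagree": 2, "Agree": 3, "Strongly Agree": 4}

def composite_score(responses: dict[str, str], reverse_items: set[str] = frozenset()) -> float:
    """Combine one respondent's Likert-item answers into a single composite (mean) score."""
    values = []
    for item, label in responses.items():
        value = SCORES[label]
        if item in reverse_items:                # reverse-score negatively keyed items
            value = (len(SCORES) + 1) - value
        values.append(value)
    return sum(values) / len(values)

# Hypothetical respondent; item names and the reverse-keyed item are illustrative only.
answers = {"q1": "Agree", "q2": "Strongly Agree", "q3": "Disagree"}
print(composite_score(answers, reverse_items={"q3"}))
```

Summing instead of averaging works just as well; the key is that every item is scored on the same ordered metric before the answers are combined.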


Hot Tip #2: Label your scale appropriately

To avoid ambiguity, assign a “label” to each response option. Make sure to use ordered labels that are descriptive and meaningful to respondents.


Hot Tip #3: One statement per item

Avoid including items that consist of multiple statements but allow for only one answer. Such items can confuse respondents and introduce unnecessary error into your data. Look for the words "and" and "or" as a signal that an item may be double-barreled.


Hot Tip #4: Avoid multiple negatives

Negatively worded statements are confusing and difficult to interpret, so rephrase them as positive ones.


Hot Tip #5: Keep it balanced

Regardless of whether you use an odd or even number of response choices, include an equal number of positive and negative options; an unbalanced scale can produce response bias.


Hot Tip #6: Provide instructions

Tell respondents how you want them to answer the question. This will ensure that respondents understand and respond to the question as intended.


Hot Tip #7: Pre-test a new scale

If you create a Likert scale, pre-test it with a small group of coworkers or members of your target population. This can help you determine whether your items are clear and whether your scale is reliable and valid.

The Likert scale and items used in this blog post are adapted from the Rosenberg Self-Esteem Scale.


 

 


Hi, I’m Ama Nyame-Mensah. I am a doctoral student at the University of Pennsylvania’s School of Social Policy & Practice. In this post, I will share with you some lessons learned about incorporating demographic variables into surveys or questionnaires.

For many, the most important part of a survey or questionnaire is the demographics section. Not only can demographic data help you describe your target audience, but it can also reveal patterns in the data across certain groups of respondents (e.g., by gender or income level). So asking the right demographic questions is crucial.

Lesson Learned #1: Plan ahead

In the survey/questionnaire design phase, consider how you will analyze your data by identifying relevant groups of respondents. This will ensure that you collect the demographic information you need. (Remember: you cannot analyze data you do not have!)

Lesson Learned #2: See what others have done

If you are unsure of what items to include in your demographics section, try searching AEA's Publications or Google Scholar for research or evaluations done in a similar area. From those sources, you can locate specific tools or survey instruments with demographic questions that you would like to incorporate into your own work.

Lesson Learned #3: Let respondents opt out

Allow respondents to opt out of the demographics section in its entirety, or, at the very least, add a "prefer not to answer" option to all demographic questions. In general, it is good practice to include a "prefer not to answer" choice when asking sensitive questions because it may make the difference between a respondent skipping a single question and abandoning the survey altogether.

Lesson Learned #4: Make it concise, but complete

I learned one of the best lessons in survey/questionnaire design at my old job. We were in the process of revamping our annual surveys, and a steering committee member suggested that we put all of our demographic questions on one page. Placing all of your demographic questions on one page will not only make your survey “feel” shorter and flow better, but it will also push you to think about which demographic questions are most relevant to your work.

Collecting the right demographic data in the right way can help you uncover meaningful and actionable insights.


 


I'm Emily Greytak, the Director of Research at GLSEN, a national organization addressing lesbian, gay, bisexual, transgender, and queer (LGBTQ) issues in K-12 education. At GLSEN, we are particularly interested in the experiences of LGBTQ people, but we also know that it's important to identify LGBTQ individuals even in more general evaluation research – whether simply as basic descriptive information about the sample or to examine potentially differential experiences.

Lessons Learned: When considering the best ways to identify LGBTQ people in your evaluations, here are four key questions to ask before selecting your measures:

  • What do you want to assess? The LGBTQ population includes identities based on both sexual orientation (LGBQ) and gender identity (T). Sometimes you might want to assess both; other times, one might be more salient. For example, if you want to know about gender differences in use of a resource, sexual orientation may not be as necessary to assess, whereas gender identity would be. Within each of these broader constructs there are different elements. For example, do you want to know about sexual identity, same-gender sexual behavior, and/or same-gender sexual attraction? If you are examining an intervention designed to affect sexual activity, then behavior might be the most relevant.
  • What is your sample? Are you targeting an LGBTQ-specific population or a more general population? The specificity of your measures and the variety of your response options might differ. What about age? Language comprehension and vernacular can vary greatly. For example, among youth the identity label "queer" might be fairly commonplace, whereas older generations might still consider it predominantly a slur, and its inclusion could put off respondents.
  • What are your measurement options? Can you include "select all that apply" options for sexual identity or gender? Can you include definitions for those who need them? Can you use multiple items to identify a construct (e.g., assessing transgender status by asking current gender along with assigned sex – see the sketch after this list)?
  • What can you do with it? Consider your capacity for analysis – e.g., do you have the expertise and resources to assess write-in responses? Once you are able to identify LGBTQ people in your sample, what do you plan to do with that information? For example, if you aren't able to examine differences between transgender males and females, a simpler transgender-status item may be sufficient (as opposed to a measure that allows for gender-specific responses).
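As a rough sketch of the multi-item approach mentioned in the third question above, the snippet below derives a gender-minority flag from two hypothetical items: sex assigned at birth and current gender identity. The variable names, response options, and coding rules are illustrative assumptions only; any real coding scheme should follow the best-practice guides listed in the Rad Resources below.

```python
def gender_minority_status(assigned_sex: str, current_gender: str) -> str:
    """Illustrative two-item coding: compare sex assigned at birth with current gender identity.

    Response options and category labels here are hypothetical, not a recommended standard.
    """
    assigned_sex = assigned_sex.strip().lower()
    current_gender = current_gender.strip().lower()
    if not assigned_sex or not current_gender:
        return "missing"
    explicit = {"transgender", "trans man", "trans woman", "nonbinary", "genderqueer"}
    if current_gender in explicit:
        return "transgender/gender-expansive"
    # Map gender labels onto the same vocabulary as assigned sex before comparing.
    gender_to_sex = {"man": "male", "woman": "female", "male": "male", "female": "female"}
    mapped = gender_to_sex.get(current_gender)
    if mapped is None:
        return "unclassified (review write-in response)"
    return "cisgender" if mapped == assigned_sex else "transgender/gender-expansive"

print(gender_minority_status("female", "Woman"))  # -> cisgender
print(gender_minority_status("female", "Man"))    # -> transgender/gender-expansive
```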

Once you answer these questions, then you can move on to selecting your specific measures. Use the Rad Resources for guidance and best practices.

Rad Resources:

Best Practices for Asking About Sexual Orientation

Best Practices for Asking Questions to Identify Transgender and Other Gender Minority Respondents

Making Your Evaluation Inclusive: A Practical Guide for Evaluation Research with LGBTQ People

The American Evaluation Association is celebrating LGBT TIG Week with our colleagues in the Lesbian, Gay, Bisexual & Transgender Issues Topical Interest Group. The contributions all this week to aea365 come from our LGBT TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Welcome to the Lesbian, Gay, Bisexual and Transgender Evaluation Topical Interest Group (LGBT TIG) week on aea365! My name is Leia K. Cain, and I’m an instructor at the University of South Florida in the Educational Measurement and Research program. This week, I’m acting as the coordinator for the LGBT TIG’s blog posts.

One area of measurement in evaluation work that I feel really strongly about is the use of binaries. When you think about sexualities, do you only think of gay or straight? Homosexual or heterosexual? What if I told you that there were so many more categories in the in-between areas?

Lesson Learned: After reading Judith Butler's work, I started working through the binaries that structure my own thinking. I still catch myself falling into binary categories sometimes, but I constantly work to "queer" my understanding of whatever topic is at hand – I break apart my understanding and try to examine it.

In my particular line of work, I have examined the effect that outness has on the experiences and perceptions of LGBTQ individuals. However, I didn't just ask participants whether they were out or not – instead, I asked them to rate their outness on a scale from 1 to 6, where 1 meant "not at all out" and 6 meant "completely out." This is similar to the Kinsey Scale, created by Dr. Alfred Kinsey, which measures sexuality on a seven-point scale with categories ranging from 0 to 6.

I encourage you to think about how binaries could be stifling your evaluation and research work as well. After all, the world isn't black or white, 0 or 1, or right or wrong. If you aren't measuring the identities that fill the spaces in between, are you really reaching your entire audience?

Rad Resource: For more information on the Kinsey Scale, check out the Kinsey Institute’s webpage.


We are Carla Hillerns and Pei-Pei Lei from the Office of Survey Research at the University of Massachusetts Medical School's Center for Health Policy and Research. We'd like to discuss a common mistake in surveys – double-barreled questions. As the name implies, a double-barreled question asks about two topics at once, which can lead to problems of interpretation: you're not sure whether the person is responding to the first 'question', the second 'question', or both. Here is an example:

Was the training session held at a convenient time and location?          Yes          No

A respondent may have different opinions about the time and location of the session, but the question only allows for one response. You may be saying to yourself, "I'd never write a question like that!" Yet double-barreling is a very easy mistake to make, especially when trying to reduce the overall number of questions on a survey. We've spotted double-barreled (and even triple-barreled) questions in lots of surveys – even validated instruments.

Hot Tips: For Avoiding Double-Barreled Questions:

  1. Prior to writing questions, list the precise topics to be measured. This step might seem like extra work but can actually make question writing easier.
  2. Avoid complicated phrasing. Using simple wording helps identify the topic of the question.
  3. Pay attention to conjunctions like "and" and "or." A conjunction can be a red flag that your question contains multiple topics (see the sketch after this list).
  4. Ask colleagues to review a working draft of the survey specifically for double-barreled questions (and other design problems). We call this step “cracking the code” because it can be a fun challenge for internal reviewers.
  5. Test the survey. Use cognitive interviews and/or pilot tests to uncover possible problems from the respondent’s perspective. See this AEA365 post for more information on cognitive interviewing.
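Tip 3 can even be partially automated. Below is a small sketch (not from the original post) that flags draft questions containing "and" or "or" so a reviewer can judge whether they are genuinely double-barreled; the second example question is made up for illustration.

```python
import re

# Conjunctions are a red flag for double-barreled wording, not proof of it.
CONJUNCTION = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def flag_possible_double_barrels(questions: list[str]) -> list[str]:
    """Return the draft questions that contain a conjunction and deserve a closer look."""
    return [q for q in questions if CONJUNCTION.search(q)]

draft_questions = [
    "Was the training session held at a convenient time and location?",
    "How satisfied were you with the training materials?",
]
for question in flag_possible_double_barrels(draft_questions):
    print("Review for double-barreling:", question)
```

Human review is still essential, since many single-topic questions legitimately contain "and" or "or."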

Rad Resource: Our go-to resource for tips on writing good questions is Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Dillman, Smyth & Christian.

Lessons Learned:

  1. Never assume. Even when we’re planning on using a previously tested instrument, we still set aside time to review it for potential design problems.
  2. Other evaluators can provide valuable knowledge about survey design. Double-barreled questions are just one of the many common errors in survey design. Other examples include leading questions and double negatives. We hope to see future AEA blogs that offer strategies to tackle these types of problems. Or please consider writing a comment to this post if you have ideas you’d like to share. Thank you!



Hi, we are Pei-Pei Lei and Carla Hillerns from the Office of Survey Research at the University of Massachusetts Medical School. Have you ever selected an answer to a survey question without reading all of the choices? Maybe you paid more attention to the first choice than the rest? Today, we’d like to share a technique that helps to minimize the impact of these types of scenarios – randomized ordering of survey response options.

Randomizing the order of response options may improve data quality by reducing order effects in your survey. When there is a list of response options, respondents often tend to select the most prominent one. In a paper survey, the first option may be the most apparent; in a phone survey, the last option may be the most memorable; and in an online survey, respondents may tend to choose from the middle of a long list because the center is more prominent.

By randomizing the order, all options have the same possibility of appearing in each response position. In Example A below, “Direct mail” appears in the top spot. However, in Example B, the responses have been randomly reassigned and “Television” now appears at the top.

[Image: Examples A and B – the same response list shown in two randomized orders]

Hot Tips:

  • Do not randomize the order if the response options are better suited to a pre-determined sequence, such as months of the year or alphabetical order, or if you are using a validated instrument that must be administered exactly as developed.
  • If the response list is divided into sub-categories, you can randomize the category order as well as the items within each category.
  • If your list includes “Other (Please specify: __________)” or “None of the above”, keep these at the bottom so the question makes sense!
  • If using the same set of response options for multiple questions, apply the first randomized ordering to the subsequent questions to avoid confusion (see the sketch after this list).
  • Randomization is not a cure for all questionnaire design challenges. For example, respondents probably won’t pay as much attention to each response option if the list is extremely long or the options are excessively wordy. So be reasonable in your design.
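Here is a minimal sketch (not from the original post) of how these rules might be implemented for a web survey: it shuffles the substantive options, keeps "Other" and "None of the above" anchored at the bottom, and can be seeded per respondent so the same ordering is reused for later questions that share the list. The option labels and seeding scheme are illustrative assumptions.

```python
import random

# Options that should stay at the bottom regardless of randomization.
ANCHORED = {"Other (Please specify)", "None of the above"}

def randomized_order(options: list[str], seed: int | None = None) -> list[str]:
    """Shuffle the substantive options; keep 'Other'/'None of the above' at the bottom."""
    rng = random.Random(seed)                      # seed per respondent to reuse the ordering
    movable = [o for o in options if o not in ANCHORED]
    anchored = [o for o in options if o in ANCHORED]
    rng.shuffle(movable)
    return movable + anchored

options = ["Direct mail", "Television", "Radio", "Email", "None of the above"]
# Using a respondent ID as the seed gives each respondent one ordering that can be
# applied verbatim to every later question built from the same option list.
print(randomized_order(options, seed=4821))
```

For a mail survey, the same function could be used to generate the handful of fixed questionnaire versions mentioned in the Lesson Learned below.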

Lesson Learned: It’s easy to administer randomization in web and telephone surveys if your survey platform supports this function. A mail survey will require multiple versions of the questionnaire. You’ll also need to account for these multiple versions as part of the data entry process to ensure that responses are coded accurately. 



