AEA365 | A Tip-a-Day by and for Evaluators

TAG | Likert scale

My name is Ama Nyame-Mensah, and I am a doctoral student in the Social Welfare program at the University of Pennsylvania.

Likert scales are commonly used in program evaluation. However, despite their widespread popularity, Likert scales are often misused and poorly constructed, which can result in misleading evaluation outcomes. Consider the following tips when using or creating Likert scales:

Hot Tip #1: Use the term correctly

A Likert scale consists of a series of statements that measure individuals’ attitudes, beliefs, or perceptions about a topic. For each statement (or Likert item), respondents choose the option from a list of ordered response choices that best aligns with their view. Numeric values are assigned to each answer choice for analysis (e.g., 1 = Strongly Disagree, 4 = Strongly Agree). Each respondent’s answers to the set of statements are then combined into a single composite score/variable.
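As a minimal sketch of this scoring step, the Python snippet below sums one respondent's item values into a composite. The item names, responses, and reverse-scored items are hypothetical; it assumes a 4-point scale (1 = Strongly Disagree, 4 = Strongly Agree) and a simple summed composite.

```python
# Hypothetical responses from one respondent on a 1-4 scale
# (1 = Strongly Disagree, 4 = Strongly Agree).
responses = {"item1": 3, "item2": 4, "item3": 1, "item4": 2}

# Many scales reverse-score negatively worded items so that a higher
# value always means "more of" the construct (here, items 3 and 4
# are assumed to be negatively worded).
reverse_items = {"item3", "item4"}
scale_max = 4

scored = {
    item: (scale_max + 1 - value) if item in reverse_items else value
    for item, value in responses.items()
}

# Combine the item scores into a single summed composite.
composite = sum(scored.values())
print(composite)  # 14
```

Averaging the items instead of summing them is an equally common choice; either way, report which convention you used.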


Hot Tip #2: Label your scale appropriately

To avoid ambiguity, assign a “label” to each response option. Make sure to use ordered labels that are descriptive and meaningful to respondents.


Hot Tip #3: One statement per item

Avoid items that contain multiple statements but allow only one answer. Such items can confuse respondents and introduce unnecessary error into your data. Look for the words “and” and “or” as a signal that an item may be double-barreled.


Hot Tip #4: Avoid multiple negatives

Negatively worded statements are confusing and difficult to interpret, so rephrase them as positive ones wherever possible.


Hot Tip #5: Keep it balanced

Regardless of whether you use an odd or even number of response choices, include an equal number of positive and negative options for respondents to choose from because an unbalanced scale can produce response bias.


Hot Tip #6: Provide instructions

Tell respondents how you want them to answer the question. This will ensure that respondents understand and respond to the question as intended.


Hot Tip #7: Pre-test a new scale

If you create a Likert scale, pre-test it with a small group of coworkers or members of your target population. This can help you determine whether your items are clear and your scale is reliable and valid.

The Likert scale and items used in this blog post are adapted from the Rosenberg Self-Esteem Scale.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 


Hello, I am Barb Goldsby, Interim Director of the Exceptional Student Services Unit and Supervisor of the Secondary Transition Services Team from the Colorado Department of Education. We truly value meaningful evaluations of our training, technical assistance and professional development.  With the help of the National Secondary Transition Technical Assistance Center (NSTTAC), we have taken Guskey’s evaluation components to heart.

Lesson Learned:

  • Participants Don’t Always Know What They Don’t Know. In the area of participant learning (Level 2), we found that pre/post evaluations were the best way to measure participants’ learning. However, when participants completed the pre-evaluation before the content was presented, they tended to rate themselves high because they simply didn’t know what they didn’t know. After the training, they realized what they had not known and rated themselves lower.

Hot Tip:

  • Conduct a Post-then-Pre Test. To combat this skewing of the data, we developed a pre/post evaluation that participants complete after the content has been delivered. This way, they can genuinely reflect on their knowledge base both pre-content and post-content. We use a Likert-like scale of 1 to 5, with 1 indicating a low level of knowledge and skills and 5 a high level.
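As a hypothetical illustration of how such post-then-pre ratings might be summarized (the participant data below are invented, not from these trainings), the gain is simply the difference between the mean retrospective-pre and post ratings:

```python
# Hypothetical post-then-pre ratings on a 1-5 knowledge/skills scale.
# Each tuple is (retrospective pre rating, post rating) for one participant.
ratings = [(2, 4), (1, 3), (3, 5), (2, 5)]

pre_mean = sum(pre for pre, _ in ratings) / len(ratings)
post_mean = sum(post for _, post in ratings) / len(ratings)

# Mean self-reported gain across participants.
mean_gain = post_mean - pre_mean
print(pre_mean, post_mean, mean_gain)  # 2.0 4.25 2.25
```

Because both ratings are collected at the same sitting, the "pre" column reflects what participants now realize they did not know.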

  • Shared Experience and Resources Help Improve Practices. We utilize this evaluation system in all of our trainings and then use the data to inform future trainings.  Different teams across our unit are now utilizing this evaluation system in their trainings.  Here is an example of the evaluation form we use with participants and the system used to analyze the evaluation data from our trainings.

Rad Resources:

The American Evaluation Association is celebrating the Evaluating Professional Development Community of Practice (PDCoP) Week. The contributions all week come from PDCoP members.


My name is Gary Resnick and I am the Director of Research at Harder+Company Community Research, a California-based consulting firm. My background combines program evaluation with child development research, and I have an interest in system theory and networks.

Harder+Company has been involved in evaluating First 5 programs in a number of California counties. First 5 arose from California’s 1998 Proposition 10, which added a tax on tobacco products, with the funds distributed to counties to support local programs that improve services for children from birth to age 5 and their families. An important goal of First 5 funding is to act as a catalyst for change in each county’s systems of care. To measure system change, we focused on inter-agency coordination and collaboration: increases in coordination and collaboration would indicate that agencies are better able to share resources and clients, reduce redundancies and service gaps, and increase efficiency.

Rad Resource: The Levels of Collaboration Scale assesses collaboration, has excellent psychometric properties, and can be administered in web-based surveys to agency respondents. To see it in action, check out this article in the American Journal of Evaluation. The scale was originally a 5-point Likert scale; we combined the two highest scale points to create a 4-point scale that is easier for respondents.

Hot Tip: Start by defining the network member agencies using objective, clear, and unbiased criteria. Later, you can expand the network by asking respondents to nominate up to three additional agencies with whom they interact.

Hot Tip: Select at least two respondents from each organization (three is better), drawn from different levels of the organization: administrators and managers as well as direct line staff.

Lesson Learned: It is important to have complete, reciprocal ratings for each agency (even if not from all respondents). If you have too much missing data at the agency level, consider excluding the agency from the network.

Hot Tip: Use Netdraw, a Windows freeware program, to produce two-dimensional network maps from agency-level Collaboration Scale ratings. See our maps here. The maps identify agencies most involved with other agencies at the center of the map (key players) and those least involved, at the periphery of the network. Add attributes of agencies (e.g. geographic region served) to map subgroups of your network.
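NetDraw handles the mapping itself; as a rough illustration of the underlying idea, the sketch below ranks hypothetical agencies by their number of collaborative ties, the same logic that places key players at the center of a map and less-connected agencies at the periphery. The agency names and ratings are invented.

```python
from collections import defaultdict

# Hypothetical agency pairs with agency-level Collaboration Scale
# ratings (1-4); each tuple is (agency, agency, rating).
edges = [
    ("Agency A", "Agency B", 4),
    ("Agency A", "Agency C", 3),
    ("Agency B", "Agency C", 2),
    ("Agency C", "Agency D", 1),
]

# Count each agency's ties (its degree in the network).
degree = defaultdict(int)
for a, b, _rating in edges:
    degree[a] += 1
    degree[b] += 1

# Agencies with the most ties belong at the center of the map
# (key players); those with few ties sit at the periphery.
ranked = sorted(degree.items(), key=lambda kv: -kv[1])
print(ranked)
```

A fuller analysis would weight ties by the collaboration rating rather than just counting them, which is closer to what the maps described above convey.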

Hot Tip: Produce two sets of maps, one with no agency labels for public reporting, and another with agency labels, for internal discussions with clients and agencies. Convene a meeting with the agency respondents and show them the maps with agency labels, to help them understand where they stand in the network and to foster collaboration.


