TAG | surveys
Hi! I’m Silvana Bialosiewicz, an advanced doctoral student at Claremont Graduate University (CGU) and Senior Research Associate at the Claremont Evaluation Center. My goal as an applied researcher is to help develop and disseminate best practices for high-quality evaluation of programs that serve children. Today I’d like to share some strategies for collecting valid and reliable data from young children.
Research on youth-program evaluation and child development reveals that:
- Children less than nine years old possess limited abilities to accurately self-report, especially by way of written surveys
- Previously validated measures are not always appropriate for diverse samples of children
A critical step in designing evaluations of youth programs, therefore, is developing and/or selecting measures that are sensitive to children’s language skills, reading and writing abilities, and life experiences.
Hot Tip: Consider using alternatives to written surveys, such as interviews, when collecting data from children less than nine years old. If written surveys are used, be mindful that young children struggle to understand complex questions and to accurately recall past experiences. Surveys for young children should be orally administered, use simple language, and use response options that children can easily understand.
Hot Tip: Do not assume that a measure validated in a previous study is appropriate for your participants, especially when the program serves a diverse population of children. The majority of psychological measures for children have been developed and normed on samples of high-SES Caucasian children and cannot be assumed to be valid and reliable for diverse samples of children (e.g., English Language Learners, ethnic and cultural minorities, children with physical or sensory disabilities).
Hot Tip: Pilot test your measures, even previously validated measures, before launching full scale data collection to ensure developmental and contextual appropriateness.
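One quantitative check worth running on pilot data is internal-consistency reliability. Below is a minimal sketch in Python; the data, item names, and the conventional 0.70 threshold are illustrative assumptions, not something prescribed in this post:

```python
# Minimal sketch: estimating internal-consistency reliability (Cronbach's
# alpha) from pilot-test data, one column per survey item, one row per child.
# The data and column names below are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item scores (higher = more consistent)."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

pilot = pd.DataFrame({                          # hypothetical pilot responses
    "item1": [3, 4, 2, 5, 4, 3],
    "item2": [3, 5, 2, 4, 4, 3],
    "item3": [2, 4, 1, 5, 3, 3],
})
print(f"alpha = {cronbach_alpha(pilot):.2f}")   # flag scales below ~0.70
```

A low alpha in a pilot with young or diverse respondents is exactly the kind of early warning that a "previously validated" measure may not travel well to your sample.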
Rad Resources: Researching with Children & Young People by Tisdall, Davis, & Gallagher and Through the Eyes of the Child: Obtaining Self-Reports from Children by La Greca are two great books for anyone looking to expand their knowledge on this topic.
The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to email@example.com. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Welcome to the Evaluation 2013 Conference Local Arrangements Working Group (LAWG) week on aea365. My name is Fatima Frank. I am the Evaluation Specialist for evalû, a small consulting firm that focuses exclusively on rigorous evaluations of social and economic development initiatives. I was recently part of a discussion with fellow evaluators on how organizations with limited resources can utilize common and affordable technological tools to improve on-the-ground data collection. This happens to be an area in which evalû has had great success.
Lessons Learned: Using paper surveys for data collection can create big hassles and inefficiencies. In addition to spending valuable time on data entry, transferring data from paper surveys to an electronic database leaves a lot of room for errors. And, really, who wants to carry, organize, and track hundreds of paper surveys?
Rad Resource: We have become big fans of EpiSurveyor (soon-to-be Magpi), a free, open-access mobile technology tool for data collection. For those who aren’t familiar with it, EpiSurveyor is a mobile-phone- and web-based data collection system developed and supported in Kenya and used by hundreds of organizations in over 170 countries. Below are a few reasons why we use EpiSurveyor:
- It’s free!
- It’s extremely user-friendly. Our evaluators are all self-taught users, and you can easily transfer knowledge and skills to field staff and enumerators. We’ve found that field offices are very proactive in adopting EpiSurveyor and often tell us that they plan to keep using it beyond our engagement with them.
- Managing data collection and oversight is easy, since submitted data can be viewed in real time on the EpiSurveyor server. Data or project managers can view the data in their EpiSurveyor account, from anywhere, as it is being sent (a minimal sketch of this kind of oversight check follows this list).
- The GPS function allows data and project managers to track geographic data, follow enumerators, and make sure the project and evaluation target area is being adequately covered.
- Check out our full review here.
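As a rough illustration of the oversight and GPS checks mentioned above: the sketch below is not the EpiSurveyor/Magpi API. It simply assumes you have exported submissions to a CSV with hypothetical enumerator, latitude, and longitude columns, and it checks coverage against a hypothetical target bounding box:

```python
# Illustrative sketch only: check enumerator output and geographic coverage
# from an exported submissions file. File name, columns, and the target
# bounding box are all hypothetical assumptions.
import pandas as pd

TARGET = {"lat": (-1.40, -1.20), "lon": (36.70, 36.95)}  # hypothetical area

subs = pd.read_csv("submissions.csv")            # exported survey data
in_area = (
    subs["latitude"].between(*TARGET["lat"])
    & subs["longitude"].between(*TARGET["lon"])
)
print(subs.groupby("enumerator").size())         # submissions per enumerator
print(f"{in_area.mean():.0%} of submissions fall inside the target area")
```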
Hot Tip—Insider’s advice for Evaluation 2013 in DC: The 2013 AEA conference is coming to DC! DC has an abundance of free museums and one of my favorites is The National Portrait Gallery in Chinatown. Be sure to check out the lovely Kogod Courtyard. Afterwards, stroll into Chinatown for a bite to eat.
We’re thinking forward to October and the Evaluation 2013 annual conference all this week with our colleagues in the Local Arrangements Working Group (LAWG). AEA is accepting proposals to present at Evaluation 2013 through March 15 via the conference website.
Hello! We are Monica Hargraves and Miranda Fang, from the Cornell Office for Research on Evaluation. We presented together at Eval2012 and would like to share some practical tips on literature searches in the context of evaluation.
Program managers often face an expectation worthy of Hercules: to provide strong research-quality evidence that their program is effective in producing valuable outcomes. This is daunting, particularly if the valued outcomes only emerge over a long time horizon, the program is new or small, or the appropriate evaluation is way beyond the capacity of the program. The question is, what can bridge the gap between what’s feasible for the program and what’s needed in terms of evidence?
Hot Tip: Strategic literature searches can help. And visual program logic models provide an ideal framework for organizing the search process.
Quoting our colleagues Jennifer Urban and William Trochim in their AJE 2009 paper on the Golden Spike,
“The golden spike is literally a place that can be drawn on the visual causal map … where the evaluation results and the research evidence meet.”
We use pathway models, which build on a columnar logic model and tell the logical story of the program by specifying the connections between the activities and the short-term outcome(s) they each contribute to, and the subsequent short- or mid-term outcome(s) that those lead to, and so on. What emerges is a visual program theory with links all the way through to the program’s anticipated long-term outcomes.
The visual model organizes and makes succinct the key elements of the program theory. It helps an evaluator to zero in on the particular outcomes and causal links that are needed in order to build credible evidence beyond the scope of their current evaluation.
Here’s an example, from a Cornell Cooperative Extension program on energy conservation in a youth summer camp. Suppose the program needs to report to a key funder whose interest is in youth careers in the environmental sector. If the program evaluation demonstrates that the program is successful in building a positive attitude toward green energy careers, then a literature search can focus on evidence for the link between that mid-term outcome and the long-term outcome of an increase in youth entering the green workforce.
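To make the structure concrete, here is a minimal sketch of a pathway model represented as a directed graph in Python. The node names loosely follow the camp example and are hypothetical, as is the bookkeeping of which links your own evaluation can support:

```python
# Minimal sketch: a pathway model as a directed graph (plain Python dicts).
# Node names and evidence assignments are hypothetical illustrations.
pathway = {
    "camp energy activities": ["knowledge of green energy"],
    "knowledge of green energy": ["positive attitude toward green careers"],
    "positive attitude toward green careers": ["youth enter green workforce"],
}

evidence = {  # links the current evaluation can credibly support
    ("camp energy activities", "knowledge of green energy"): "evaluation",
    ("knowledge of green energy",
     "positive attitude toward green careers"): "evaluation",
}

# Links without local evidence are candidates for a targeted literature
# search: the "golden spike" where evaluation results and research meet.
for src, dsts in pathway.items():
    for dst in dsts:
        source = evidence.get((src, dst), "literature search needed")
        print(f"{src} -> {dst}: {source}")
```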
We are Jan Losby (a CDC employee) and Anne Wetmore (a former ORISE Fellow at CDC), members of the Evaluation and Program Effectiveness Team in the Division for Heart Disease and Stroke Prevention at the Centers for Disease Control and Prevention (CDC).
Inevitably when you start creating a survey and you are using a Likert scale, you’ll ask yourself “Which should I use, an odd- or even-numbered scale?” To help you decide which might be best for you, let’s look at the advantages and disadvantages of each:
Hot Tip #1: Choosing odd. All odd-numbered scales have a middle value, often labeled “neither,” “neutral,” or “undecided.” Even when the mid-point is labeled, however, respondents may each interpret this response category differently; a 2009 study found that possible interpretations of the mid-point are quite numerous (see the Rad Resources below).
Advantages of odd-numbered scales
- Can be appealing to respondents since there is an easy option to select
- If topic is highly sensitive, may be best to offer neutral point
Disadvantages of odd-numbered scales
- People may be less discriminating in response (respondents don’t take time to carefully consider all of the various response categories)
- May not be collecting accurate responses (the mid-point can mean different things to different people)
Hot Tip #2: Choosing even. Even-numbered scales remove the neutral middle option. This is sometimes called a “forced choice” method, since the neutral option is not available to respondents.
Advantages of even-numbered scales
- People may be more discriminating, be more thoughtful
- Eliminates possible misinterpretation of mid-point
Disadvantages of even-numbered scales
- Respondents could become frustrated and not complete the survey
- May not be collecting accurate responses if respondents feel forced to select an option that does not reflect their actual view
There isn’t a simple rule for when to use odd or even; ultimately, that decision should be informed by (a) your survey topic, (b) what you know about your respondents, (c) how you plan to administer the survey, and (d) your purpose. Weigh these four elements against the advantages and disadvantages of odd and even scales, and you will likely reach a decision that works best for you.
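If you have pilot data, or past data from a comparable odd-numbered scale, one quick diagnostic is to see how heavily respondents lean on the mid-point. A minimal sketch with hypothetical 5-point responses:

```python
# Minimal sketch (hypothetical data): how heavily do respondents use the
# mid-point of a 5-point scale? A large share is a cue to probe what "3"
# means to respondents before committing to an odd or even format.
from collections import Counter

responses = [3, 4, 3, 2, 3, 5, 3, 3, 1, 4, 3, 3]  # 1-5 Likert, hypothetical
counts = Counter(responses)
midpoint_share = counts[3] / len(responses)

print(dict(sorted(counts.items())))               # full response distribution
print(f"mid-point share: {midpoint_share:.0%}")
```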
Rad Resources:
- A 2009 AEA presentation, “Stuck in the Middle,” on interpreting the mid-point.
- A good list of Likert-type response categories for 3- to 7-point scales.
- A short resource on Likert scales – description and examples.
- A 2004 study looking at the effect of scale format (using 3 to 9 categories) on the reliability of Likert-type rating scales. (requires account)
My name is Ellen Steiner, and I am Director of Market Research and Evaluation at Energy Market Innovations, a research-based consultancy focused on strategic program design and evaluation for the energy efficiency industry. We work to create an energy future that is sustainable for coming generations.
An increasingly common practice…
In energy efficiency program evaluations, telephone surveys are traditionally the mode of choice. However, there are many reasons evaluators are increasingly interested in online surveys, including the potential for:
(1) lower costs,
(2) increased sample sizes,
(3) more rapid deployment, and
(4) enhanced respondent convenience.
With online surveys, fielding costs are often lower and larger sample sizes can be reached cost-effectively. Larger samples yield greater accuracy and can support finer segmentation of the sample. Online surveys also take less time to field and can be completed at the respondent’s convenience.
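To put “greater accuracy” in concrete terms, here is a minimal sketch of how the 95% margin of error for an estimated proportion shrinks as the sample grows. It assumes simple random sampling and the worst-case proportion of 0.5; the sample sizes are arbitrary:

```python
# Minimal sketch: the 95% margin of error for a proportion shrinks with
# sample size n (simple random sampling, worst-case p = 0.5 assumed).
from math import sqrt

Z95 = 1.96  # z-score for a 95% confidence level
for n in (100, 400, 1600):
    moe = Z95 * sqrt(0.5 * 0.5 / n)
    print(f"n = {n:4d}: margin of error = ±{moe:.1%}")
```

Quadrupling the sample only halves the margin of error, which is why the lower per-response cost of online fielding matters so much.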
Yet be aware…
In contrast, there are still many concerns regarding the validity and reliability of online surveys. Disadvantages of online surveys potentially include:
(1) respondent bias,
(2) response rate issues,
(3) normative effects, and
(4) cognitive effects.
Certain populations are less likely to have Internet access or to respond to an Internet survey, which poses a generalizability threat. And although past research indicates that online response rates are often equal to or slightly higher than those of traditional modes, Internet users are increasingly exposed to online survey solicitations, so researchers must employ creative and effective strategies for garnering participation. In addition, without a trained interviewer present to clarify questions and probe responses, normative and cognitive challenges can arise that may lead to less reliable data.
Come talk with us at AEA!
My colleague, Jess Chandler, and I will be presenting a session at the AEA conference titled “Using Online Surveys and Telephone Surveys for a Commercial Energy Efficiency Program Evaluation: A Mode Effects Experiment,” in which we will discuss the findings from a recent study we conducted comparing online to telephone surveys. We hope you can join us and share your experiences with online surveys!
Hot Tips:
- Email Address Availability – In our experience, if you do not have email addresses for the majority of the population you want to sample, the cost benefits of an Internet sample are cancelled out by the time spent seeking out or trying to purchase email addresses.
- Mode Effects Pilot Studies – Where possible, conduct a pilot study with a randomized controlled design: draw two or more samples from the same population and give each sample the survey in a different mode. This is a best practice for understanding the potential limitations of an online survey for the specific population under study (a minimal sketch follows this list).
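Here is a minimal sketch of such a mode-effects pilot. The sample frame, the satisfaction item, and the stand-in responses are all hypothetical, and a real study would also need a power analysis:

```python
# Minimal sketch of a mode-effects pilot: randomly split one sample frame
# between online and telephone modes, then compare a key item across modes.
import random
from scipy.stats import ttest_ind

random.seed(42)
frame = [f"customer_{i}" for i in range(200)]    # hypothetical sample frame
random.shuffle(frame)
online, phone = frame[:100], frame[100:]         # random mode assignment

# ...field the survey in each mode, then compare responses to one item.
# Random stand-ins below take the place of real fielded data:
online_scores = [random.randint(1, 5) for _ in online]
phone_scores = [random.randint(1, 5) for _ in phone]

t, p = ttest_ind(online_scores, phone_scores)    # difference between modes
print(f"t = {t:.2f}, p = {p:.3f}")  # small p suggests a mode effect on this item
```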
The American Evaluation Association is celebrating the Business, Leadership, and Performance TIG (BLP) Week. The contributions all week come from BLP members.