AEA365 | A Tip-a-Day by and for Evaluators

TAG | surveys

Hello! My name is Valerie Futch Ehrlich and I am the Evaluation and Research Lead for the Societal Advancement group at the Center for Creative Leadership. My team focuses on supporting our K-12, higher education, non-profit, and public health sector initiatives through evaluation and research. I want to share with you our recent experience using pulse surveys to collect feedback from school-wide faculty on a professional development initiative.

“Pulse surveys” are short, specific, and actionable surveys intended to collect rapid feedback that is immediately used to inform the direction of a program, activity, or culture. Through our partnership with Ravenscroft School, we used a pulse survey midway through a (mandated) year-long professional development experience and timed it so that the pulse feedback would inform the next phase of programming.

We used Waggl, a tool designed for pulse surveys, which has a simple interface offering yes/no questions, agreement scales, or a single open-ended question. A neat feature of Waggl is that it allows voting for as long as the pulse is open, encouraging participants to read their peers’ open-ended responses and vote on them. This way, the most actionable requests filter up to the top based on voting, which can help drive decisions.

In our case, the Waggl responses directly informed the design of the second phase of training. We also repeated the Waggl toward the end of the school year to quickly see if our program had its intended impact, to provide ideas for a more comprehensive evaluation survey, and to inform the next year of work with the school.

Hot Tips:

  • Keep your pulse survey short! This helps ensure participation. It should be no more than 5-10 questions and take only a minute or two to complete.
  • Pulse survey results are quick fodder for infographics! Waggl has this functionality built in, but with a little tweaking you could get similar information from a Google Form or other tools.
  • Consider demographic categories that might provide useful ways to cut the data. We looked at differences across school levels and how different cohort groups were responding, which helped our program designers further tailor the training.
  • Pulse surveys build engagement and buy-in…when you use them! Faculty reported feeling very validated by our use of their feedback in the program design. The transparency and openness to feedback by our design team likely increased faculty buy-in for the entire program.

Lesson Learned:

Think outside the box for pulse surveys. Although they are popular with companies for exploring employee engagement, imagine using them with parents at a school, mentors at an after-school program, or even students in a classroom giving feedback to their instructor. There are many possibilities! Any place you want quick, useful feedback would be a great place to add them. In our next phase of work, we are considering training school leaders to send out their own pulse surveys and incorporate the feedback into their practices. Stay tuned!

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hi! My name is Catherine Callow-Heusser, Ph.D., President of EndVision Research and Evaluation. I served as the evaluator of a 5-year personnel preparation grant funded by the Office of Special Education Programs (OSEP). The project trained two cohorts of graduate students, each completing a 2-year Master’s level program. When the grant was funded, our first task was to comb the research literature and policy statements to identify the competencies needed for graduates of the program. By the time this was completed, the first cohort of graduate students had nearly completed their first semester of study.

As those students graduated and the next cohort was selected to begin the program, we administered a self-report measure of knowledge, skills, and dispositions based on the competencies. For the first cohort, this served as a retrospective pretest as well as a posttest. For the second cohort, this assessment served as a pretest, and the same survey was administered as a posttest two years later as they graduated. The timeline is shown below.

[Figure: timeline of survey administrations for the two cohorts]

Retrospective pretest and pretest averages across competency categories were quite similar, as were posttest averages. Overall pretest averages for cohorts 1 and 2 were 1.23 (standard deviation, sd = 0.40) and 1.35 (sd = 0.47), respectively. Item-level analysis indicated that the pretest item averages were strongly and statistically significantly correlated (Pearson r = 0.79, p < 0.01), and that the Hedges’ g measure of difference between pretest averages for cohorts 1 and 2 was only 0.23, whereas the Hedges’ g measure of difference from pre- to posttest was 5.3 and 5.6 for the two cohorts, respectively.
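
For readers who want to run this kind of comparison themselves, here is a minimal sketch of how Hedges’ g can be computed from two group means and standard deviations. The cohort sizes used below are hypothetical (the post does not report them), so the printed value only approximates the 0.23 reported above.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups, with the small-sample correction."""
    # Pooled standard deviation, weighted by each group's degrees of freedom.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean2 - mean1) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' small-sample correction factor
    return d * correction

# Pretest means and SDs reported above; the cohort sizes (12 each) are assumptions.
print(round(hedges_g(1.23, 0.40, 12, 1.35, 0.47, 12), 2))  # about 0.27 with these assumed sizes
```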

[Chart: pretest and posttest averages for the two cohorts]

Rad Resources: There are many publications that provide evidence supporting retrospective surveys, describe the pitfalls, and suggest ways to use them. Here are a few:

Hot Tip #1: Too often, we as evaluators wish we’d collected potentially important baseline data. This analysis shows that, for a self-report measure of knowledge and skills, a retrospective pretest provided results very similar to a pretest administered before learning when comparing two cohorts of students. When appropriate, retrospective surveys can provide worthwhile outcome data.

Hot Tip #2: Evaluation plans often evolve over the course of a project. If potentially important baseline data were not collected, consider administering a retrospective survey or self-assessment of knowledge and skills, particularly when data from additional cohorts are available for comparison.

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PreK-12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! We are Monica Hargraves and Miranda Fang, from the Cornell Office for Research on Evaluation. We presented together at Eval2012 and would like to share some practical tips on literature searches in the context of evaluation.

2016 Update: Monica Hargraves is now Associate Director for Evaluation Partnerships at the Cornell Office for Research on Evaluation; Miranda Fang is now Manager, Development Strategy and Operations at Teach For America – Los Angeles

Program managers often face an expectation worthy of Hercules: to provide strong research-quality evidence that their program is effective in producing valuable outcomes. This is daunting, particularly if the valued outcomes only emerge over a long time horizon, the program is new or small, or the appropriate evaluation is way beyond the capacity of the program.  The question is, what can bridge the gap between what’s feasible for the program and what’s needed in terms of evidence?

Hot Tip: Strategic literature searches can help. And visual program logic models provide an ideal framework for organizing the search process.

Quoting our colleagues Jennifer Urban and William Trochim in their AJE 2009 paper on the Golden Spike,

“The golden spike is literally a place that can be drawn on the visual causal map … where the evaluation results and the research evidence meet.”

We use pathway models, which build on a columnar logic model and tell the logical story of the program by specifying the connections between the activities and the short-term outcome(s) they each contribute to, and the subsequent short- or mid-term outcome(s) that those lead to, and so on.  What emerges is a visual program theory with links all the way through to the program’s anticipated long-term outcomes.

The visual model organizes and makes succinct the key elements of the program theory. It helps an evaluator zero in on the particular outcomes and causal links where credible evidence is needed beyond the scope of the current evaluation.

Here’s an example, from a Cornell Cooperative Extension program on energy conservation in a youth summer camp.  Suppose the program needs to report to a key funder whose interest is in youth careers in the environmental sector. If the program evaluation demonstrates that the program is successful in building a positive attitude towards green energy careers, then a literature search can focus on evidence for the link (where the red star is) between that mid-term outcome and the long-term outcome of an increase in youth entering the green workforce.
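
Purely as an illustration, a pathway model can be written down as a simple set of directed links, which makes it easy to single out the one link, the golden spike, that the literature search needs to support. The node labels in the sketch below are loosely based on the camp example above; the intermediate outcome is invented.

```python
# Toy pathway model: each element points to the outcome(s) it contributes to.
# Labels are loosely based on the camp example above; the middle outcome is invented.
pathway = {
    "camp energy-conservation activities": ["interest in and knowledge of green energy"],
    "interest in and knowledge of green energy": ["positive attitude toward green energy careers"],
    "positive attitude toward green energy careers": ["more youth entering the green workforce"],
}

# The program evaluation can credibly demonstrate outcomes up to this node...
evaluation_reaches = "positive attitude toward green energy careers"

# ...so the literature search targets the link(s) leaving it: the "golden spike."
for long_term_outcome in pathway[evaluation_reaches]:
    print(f"Find research evidence that '{evaluation_reaches}' leads to '{long_term_outcome}'.")
```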

The American Evaluation Association is celebrating Best of aea365, an occasional series. The contributions for Best of aea365 are reposts of great blog articles from our earlier years. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


We are Carla Hillerns and Pei-Pei Lei from the Office of Survey Research at the University of Massachusetts Medical School’s Center for Health Policy and Research. We’d like to discuss a common mistake in surveys – double-barreled questions. As the name implies, a double-barreled question asks about two topics, which can lead to issues of interpretation as you’re not sure if the person is responding to the first ‘question’, the second ‘question’ or both. Here is an example:

Was the training session held at a convenient time and location?          Yes          No

A respondent may have different opinions about the time and location of the session but the question only allows for one response. You may be saying to yourself, “I’d never write a question like that!” Yet double barreling is a very easy mistake to make, especially when trying to reduce the overall number of questions on a survey. We’ve spotted double (and even triple) barreled questions in lots of surveys – even validated instruments.

Hot Tips: For Avoiding Double-Barreled Questions:

  1. Prior to writing questions, list the precise topics to be measured. This step might seem like extra work but can actually make question writing easier.
  2. Avoid complicated phrasing. Using simple wording helps identify the topic of the question.
  3. Pay attention to conjunctions like “and” and “or.” A conjunction can be a red flag that your question contains multiple topics (see the sketch after this list).
  4. Ask colleagues to review a working draft of the survey specifically for double-barreled questions (and other design problems). We call this step “cracking the code” because it can be a fun challenge for internal reviewers.
  5. Test the survey. Use cognitive interviews and/or pilot tests to uncover possible problems from the respondent’s perspective. See this AEA365 post for more information on cognitive interviewing.
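
As a small illustration of tip 3, here is a rough sketch of a conjunction check. It is a toy screen, not a standard tool, and it only flags candidates for a human reviewer, since not every “and” or “or” makes a question double-barreled.

```python
import re

# Conjunctions that are often a red flag for double-barreled questions (tip 3 above).
RED_FLAG = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def flag_possible_double_barrels(questions):
    """Return the draft questions that contain a conjunction, for a reviewer to double-check."""
    return [q for q in questions if RED_FLAG.search(q)]

draft_questions = [
    "Was the training session held at a convenient time and location?",
    "Was the training session held at a convenient time?",
    "Was the training session held at a convenient location?",
]

for question in flag_possible_double_barrels(draft_questions):
    print("Review:", question)
```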

Rad Resource: Our go-to resource for tips on writing good questions is Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method by Dillman, Smyth, and Christian.

Lessons Learned:

  1. Never assume. Even when we’re planning on using a previously tested instrument, we still set aside time to review it for potential design problems.
  2. Other evaluators can provide valuable knowledge about survey design. Double-barreled questions are just one of the many common errors in survey design. Other examples include leading questions and double negatives. We hope to see future AEA blogs that offer strategies to tackle these types of problems. Or please consider writing a comment to this post if you have ideas you’d like to share. Thank you!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi, we are Pei-Pei Lei and Carla Hillerns from the Office of Survey Research at the University of Massachusetts Medical School. Have you ever selected an answer to a survey question without reading all of the choices? Maybe you paid more attention to the first choice than the rest? Today, we’d like to share a technique that helps to minimize the impact of these types of scenarios – randomized ordering of survey response options.

Randomizing the order of response options may improve data quality by reducing order effects in your survey. When there is a list of response options, respondents often tend to select the most prominent one. For example, in a paper survey, the first option may be most apparent. In a phone survey, the last option may be most memorable. And in an online survey with a long list, respondents may gravitate toward the middle, because the center is more prominent.

By randomizing the order, all options have the same possibility of appearing in each response position. In Example A below, “Direct mail” appears in the top spot. However, in Example B, the responses have been randomly reassigned and “Television” now appears at the top.

[Figure: Examples A and B, showing the same response list in two different randomized orders]

Hot Tips:

  • Do not randomize the order if the response options are better suited to a pre-determined sequence, such as months of the year or alphabetization, or if using a validated instrument that needs to maintain the full survey as developed.
  • If the response list is divided into sub-categories, you can randomize the category order as well as the items within each category.
  • If your list includes “Other (Please specify: __________)” or “None of the above”, keep these at the bottom so the question makes sense!
  • If using the same set of response options for multiple questions, apply the first randomized ordering to the subsequent questions to avoid confusion (see the sketch after this list).
  • Randomization is not a cure for all questionnaire design challenges. For example, respondents probably won’t pay as much attention to each response option if the list is extremely long or the options are excessively wordy. So be reasonable in your design.
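
Here is a minimal sketch of how this kind of randomization could be scripted by hand. The response list is hypothetical except for “Direct mail” and “Television,” which come from the example above; in practice your survey platform would handle this, but the logic of pinning “Other” and “None of the above” to the bottom and reusing one ordering across questions is the same.

```python
import random

# Hypothetical response list; only "Direct mail" and "Television" come from the example above.
options = [
    "Direct mail",
    "Television",
    "Radio",
    "Online ad",
    "Other (Please specify: __________)",
    "None of the above",
]

# Options that should stay at the bottom so the question still makes sense.
PINNED = {"Other (Please specify: __________)", "None of the above"}

def randomized_order(options, rng):
    """Shuffle the substantive options and keep the pinned ones at the end."""
    shuffleable = [o for o in options if o not in PINNED]
    rng.shuffle(shuffleable)
    return shuffleable + [o for o in options if o in PINNED]

rng = random.Random()  # in practice, one random order per respondent
respondent_order = randomized_order(options, rng)

# Reuse respondent_order for every later question that shares this response set.
print(respondent_order)
```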

Lesson Learned: It’s easy to administer randomization in web and telephone surveys if your survey platform supports this function. A mail survey will require multiple versions of the questionnaire. You’ll also need to account for these multiple versions as part of the data entry process to ensure that responses are coded accurately. 

Rad Resources:

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


My name is Lisa Richardson and I am the internal Improvement Advisor/Evaluator for the UCLA-Duke University National Center for Child Traumatic Stress (NCCTS), which in addition to coordinating the collaborative activities of the National Child Traumatic Stress Network (NCTSN), provides leadership in many aspects of child trauma policy, practice, and training. Online surveys are a favored NCTSN tool, particularly for the collaborative development and evaluation of network products. By last count, over 600 surveys have been done since 2006!

This plethora of surveys has become an unexpected and successful mechanism for enhancing evaluation and organizational learning. In the past two years, our evaluation team has taken on very few surveys itself and has instead handed the process over to NCCTS staff and NCTSN groups. We made a previously recommended review process a requirement and increased technical assistance to build capacity.

Approaching every review as an educational opportunity is the cornerstone of this process. The goal is not only to produce a well-designed survey but also to enhance staff members’ ability to create better ones in the future. Coaching builds on staff’s intrinsic passion for working in the child trauma field and for doing collaborative work. Evaluative thinking is reinforced by coaching and shared learning over time.

We have seen the quality of surveys improve tremendously (along with response rates), larger and more complicated surveys are being undertaken, and I now receive more queries from staff about which tools might best answer their questions.

Lessons Learned:

  • Put comments in writing and in context. Be clear about required versus suggested changes.
  • Provide alternatives and let the person or group decide. Walk them through the implications of the choices and the influence each would have on their survey or data, and then get out of the way!
  • Have everyone follow the same rule. My surveys are reviewed as are those developed with input from renowned treatment developers.
  • Build incrementally and use an individualized approach. A well-done survey is still an opportunity for further development.

Rad Resource: Qualtrics, the online survey solution we use, is user-friendly and sophisticated. When consulting on technical issues, I often link to technical pages on their excellent website. User Groups allow us to share survey templates, questions, messages, and graphics, increasing efficiency and consistency.

The American Evaluation Association is celebrating Organizational Learning and Evaluation Capacity Building (OL-ECB) Topical Interest Group Week. The contributions all this week to aea365 come from our OL-ECB TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Did we get your attention? We hope so. We are Carla Hillerns and Pei-Pei Lei – survey enthusiasts at the Office of Survey Research at the University of Massachusetts Medical School.

An email subject line can be a powerful first impression of an online survey. It has the potential to convince someone to open your email and take your survey. Or it can be dismissed as unimportant or irrelevant. Today’s post offers ideas for creating subject lines that maximize email open rates and survey completion rates.

Hot Tips:

  • Make it compelling – Include persuasive phrasing suited for your target recipients, such as “make your opinion count” and “brief survey.” Research in the marketing world shows that words that convey importance, like “urgent,” can lead to higher open rates.
  • Be clear – Use words that are specific and recognizable to recipients. Mention elements of the study name if they will resonate with respondents but beware of cryptic study names – just because you know what it means doesn’t mean that they will.
  • Keep it short – Many email systems, particularly on mobile devices, display a limited number of characters in the subject line. So don’t exceed 50 characters.
  • Mix it up – Vary your subject line if you are sending multiple emails to the same recipient.
  • Avoid words like “Free Gift” (even if you offer one) – Certain words may cause your email to be labeled as spam.
  • Test it – Get feedback from stakeholders before you finalize the subject line. To go one step further, consider randomly assigning different subject lines to pilot groups to see if there’s a difference in open rates or survey completion rates.
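
If you do pilot competing subject lines, the analysis can be as simple as a two-proportion comparison of open rates. The sketch below is only an illustration: the recipients, subject lines, and open counts are all made up, and it assumes your email or survey platform can report open counts per subject line.

```python
import math
import random

# Hypothetical pilot: 200 made-up recipients, randomly assigned to one of two subject lines.
recipients = [f"person_{i}@example.org" for i in range(200)]
subject_lines = ["Make your opinion count: brief survey",
                 "2-minute survey on your training experience"]
rng = random.Random(0)
assignment = {email: rng.choice(subject_lines) for email in recipients}
sent = {s: sum(1 for a in assignment.values() if a == s) for s in subject_lines}

# Made-up open counts, as your email platform might report them after fielding.
opens = {subject_lines[0]: 48, subject_lines[1]: 61}

# Simple two-proportion z-test on the open rates.
p1 = opens[subject_lines[0]] / sent[subject_lines[0]]
p2 = opens[subject_lines[1]] / sent[subject_lines[1]]
p_pool = sum(opens.values()) / sum(sent.values())
se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent[subject_lines[0]] + 1 / sent[subject_lines[1]]))
z = (p2 - p1) / se
print(f"Open rates: {p1:.0%} vs {p2:.0%}; z = {z:.2f}")
```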

Cool Trick:

  • Personalization – Some survey software systems allow you to merge customized/personalized information into the subject line, such as “Rate your experience with [Medical Practice Name].”

Lesson Learned:

  • Plan ahead for compliance – Make sure that any recruitment materials and procedures follow applicable regulations and receive Institutional Review Board approval if necessary.

Rad Resource:

  • This link provides a list of spam trigger words to avoid.

We’re interested in your suggestions. Please leave a comment if you have a subject line idea that you’d like to share.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! This is Andrea Crews-Brown, Tom McKlin, and Brandi Villa with SageFox Consulting Group, a privately owned evaluation firm with offices in Amherst, MA and Atlanta, GA, and Shelly Engelman with the School District of Philadelphia. Today we’d like to share the results of a recent survey analysis.

Lessons Learned: Retrospective vs. Traditional Surveying

Evaluators typically implement pre/post surveys to assess the impact a particular program had on its participants. Often, however, pre/post surveys are plagued by multiple challenges:

  1. Participants have little knowledge of the program content and thus leave many items blank.
  2. Participants complete the “pre” survey but do not submit a “post” survey, so their responses cannot be used for comparison.
  3. Participants’ internal frames of reference change between the pre and post administrations of the survey due to the influence of the intervention. This is often called “response-shift bias.” Howard and colleagues (1979) consistently found that the intervention itself affects the self-report metric between the pre-intervention and post-intervention administrations of the instrument.

Retrospective surveys ask participants to compare their attitudes before the program to their attitudes at the end. The retrospective survey addresses most of the challenges that plague traditional pre/post surveys:

  1. Since the survey occurs after the course, participants are more likely to understand the survey items and, therefore, provide more accurate and consistent responses.
  2. Participants can reflect on their growth over time, giving them a more accurate view of their progression.
  3. Participants take the survey in one sitting, which means that their responses are more likely to be paired.

Lesson Learned: Response Differences

To analyze response-shift bias, we compared the “pre” responses on traditional pre/post items measuring confidence to “pre” responses on identical items administered retrospectively on a post survey. On the traditional pre survey, administered at the beginning of the course, the mean confidence rating was 4.47; on the retrospective survey, the mean was 3.86. That is, students expressed significantly less confidence on the retrospective items. A Wilcoxon signed-rank test was used to evaluate the difference in score reporting from traditional pre to retrospective pre. A statistically significant difference (p < .01) was found, indicating that the course may have encouraged participants to recalibrate their perceptions of their own confidence.

[Chart: mean confidence ratings on the traditional pre survey versus the retrospective pre items]
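
For anyone who wants to run the same kind of comparison, here is a minimal sketch using SciPy’s wilcoxon function. The paired ratings below are invented for illustration only; they are not the project’s data, so the output will not reproduce the statistics reported above.

```python
from scipy.stats import wilcoxon

# Invented paired confidence ratings (1-5 scale) for the same twelve participants:
# the traditional pre survey vs. the retrospective "pre" items on the post survey.
traditional_pre   = [5, 4, 5, 3, 4, 5, 5, 4, 3, 5, 4, 5]
retrospective_pre = [4, 3, 2, 2, 3, 3, 4, 2, 2, 4, 3, 3]

# Wilcoxon signed-rank test for paired ordinal ratings.
statistic, p_value = wilcoxon(traditional_pre, retrospective_pre)
print(f"W = {statistic}, p = {p_value:.3f}")
```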

Rad Resource:  Howard has written several great articles on response-shift bias!

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 


Hello! I am Harlan Luxenberg, the Chief Operating Officer of Professional Data Analysts, Inc. (PDA), a small firm in Minneapolis specializing in public health evaluation. Over the years we have worked on multiple projects where we needed to do one or more of the following:

  • report to a diverse set of stakeholders about program outcomes
  • report on the comparability of outcomes across multiple projects or summative outcome targets
  • create report cards or dashboard-like reports for multi-site evaluations

In our search for a powerful software suite that would allow us to efficiently do these tasks while producing visually appealing reports for our clients, we found Crystal Reports. Crystal Reports is not well known in evaluation circles since it is primarily used in the financial industry. But don’t let that fool you! We have been using it over the past eight years to provide reports to educational institutions, health care providers, and individual stakeholders.

[Image clipped from the SAP Crystal Reports product page]

We have found Crystal Reports to be one of the most useful programs in our data visualization toolkit. While the latest version (2011) does cost nearly $500, you can download a free 30-day demo or buy an earlier version online (like 2008) for less than $400. We have even used Crystal Reports in conjunction with LimeSurvey, an open source and completely free online survey tool. To see more about why we love using LimeSurvey and our experience using it in our evaluations, visit our blog posts on it here.

[Image clipped from limesurvey.org]

Hot Tip: Create a report template to save time and reduce the potential for errors. To create similar-looking reports for the different grantees you are evaluating, simply put the data into a worksheet or database (such as Excel), connect Crystal to your dataset, and you’re ready to create an attractive grantee-specific report whose layout is consistent across grantees. Each report can be set up to draw only on an individual grantee’s data.
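
We can’t reproduce Crystal Reports itself here, but the template idea carries over to any tool: keep one dataset, filter to one grantee at a time, and render the same layout for each. The short pandas sketch below illustrates that pattern with made-up grantee names and measures; it is an illustration of the workflow, not Crystal Reports.

```python
import pandas as pd

# Made-up multi-grantee dataset of the kind you might connect a report template to.
data = pd.DataFrame({
    "grantee": ["Grantee A", "Grantee A", "Grantee B", "Grantee B"],
    "measure": ["Participants reached", "Quit rate", "Participants reached", "Quit rate"],
    "value":   [1200, "18%", 800, "22%"],
})

# One template, many reports: render the same layout once per grantee.
for grantee, subset in data.groupby("grantee"):
    report_lines = [f"Report for {grantee}", "-" * 24]
    report_lines += [f"{row.measure}: {row.value}" for row in subset.itertuples()]
    print("\n".join(report_lines) + "\n")
```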

Hot Tip: It’s secure! When using fancy dashboard software or even Excel, you often have to give your clients access to your raw data. With Crystal Reports, you can export reports into various formats (like PDF), or your clients can access reports online or through a viewer (both very easy to do). This is especially useful if you are comparing one organization’s data against others and do not want to provide raw aggregate data to everyone.

Resource: If you’d like to see Crystal Reports in action, you can view a template for a standard grantee-specific report that we created for processing Olweus Bullying Prevention Program data.

Lesson Learned: Learning a new software program can be hard! With Crystal Reports, there are extremely helpful online forums where other users will help answer your questions. My favorite is Tek-Tips.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


Hi! I’m Silvana Bialosiewicz, an advanced doctoral student at Claremont Graduate University (CGU) and Senior Research Associate at the Claremont Evaluation Center. My goal as an applied researcher is to help develop and disseminate “best-practices” for high-quality evaluation of programs that serve children. Today I’d like to share some strategies for collecting valid and reliable data from young children.

Research on youth-program evaluation and child development reveals that:

  • Children less than nine years old possess limited abilities to accurately self-report, especially by way of written surveys
  • Previously validated measures are not always appropriate for diverse samples of children

Therefore, a critical step in designing evaluations of youth programs is developing and/or choosing measures that are sensitive to children’s language skills, reading and writing abilities, and life experiences.

Hot Tip: Consider using alternatives to written surveys, such as interviews, when collecting data from children less than nine years old. If written surveys are used, be cognizant of young children’s inability to understand complex questions and accurately recall past experiences. Surveys for young children should be orally administered, use simple language, and use response options that children can easily understand.

Hot Tip: Do not assume that a measure that has been demonstrated to be valid in a previous study is appropriate for your participants, especially when the program serves a diverse population of children. The majority of psychological measures for children have been developed and normed on samples of high-SES Caucasian children and cannot be assumed to be valid and reliable for diverse samples of children (e.g., English Language Learners, ethnic and cultural minorities, children with physical or sensory disabilities).

Hot Tip: Pilot test your measures, even previously validated measures, before launching full scale data collection to ensure developmental and contextual appropriateness.

Rad Resources: Researching with Children & Young People by Tisdall, Davis, & Gallagher and Through the Eyes of the Child: Obtaining Self-Reports from Children by La Greca are two great books for anyone looking to expand their knowledge on this topic.

Other AEA365 posts on this topic:

Susan Menkes on Constructing Developmentally Sensitive Questions 

Tiffany Berry on Using Developmental Psychology to Promote the Whole Child in Educational Evaluations

Krista Collins and Chad Green on Designing Evaluations with the Whole Child in Mind

The American Evaluation Association is celebrating Ed Eval TIG Week with our colleagues in the PK12 Educational Evaluation Topical Interest Group. The contributions all this week to aea365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

 

 
