Hello! We are Laura Beals and Barbara Perry, internal evaluators at Jewish Family & Children’s Service, a large social-service agency located outside of Boston, MA. We have been gathering feedback from clients and other constituents via surveys, often online, for many years, but we recently completed a needs assessment project in which we also wanted to gather information from potential clients, their families, and professionals. One way we attempted to reach these new populations was through social media (via the agency’s Facebook page) and a Massachusetts non-profit email listserv.
In our invitation to participate in the survey—the Facebook post or the email to the listserv—we clearly but briefly outlined whom we were looking for (the target population), what we were asking them to do, what the incentive was, and how to access the link. In the introduction to the survey itself, we outlined participation expectations more thoroughly and indicated that we would (snail) mail the first 100 interested participants a $5 gift card to Dunkin’ Donuts. We had decided to use a physical gift card instead of an electronic one, and we are so glad that we did…because about two weeks after our first survey completions from the Facebook and listserv posts, we started to get back envelopes of gift cards with bright yellow “return to sender” stickers.
This prompted us to take a closer look at those responses immediately. We found email addresses that didn’t match the names in the mailing addresses (they don’t have to match, but it was suspicious); many email addresses ended in strings of random numbers; every mailing address was out of state; searches on whitepages.com suggested these were likely fake people; all of the responses came through the same survey collector; and the content of the responses was not in line with what we knew from real people (e.g., they completed the closed-ended questions but none of the open-ended ones). Luckily, we were able to identify a pattern in these fraudulent responses and eliminate them from our analysis. We had planned to check our responses for duplicates (email addresses, IP addresses if known, etc.), but we had not thought to review them for fraud!
Lessons Learned:
As more and more evaluators turn to social media to gather survey responses, especially from youth, we want to share three lessons learned from our experience:
- Use a unique collector for each distribution type. We use SurveyMonkey as our surveying tool, and it gives us the ability to create distinct links for each type of distribution (email to clients, post to Facebook, email to listserv, etc.). This allowed us to pinpoint which collector was the culprit.
- Read “Detecting, Preventing, and Responding to ‘Fraudsters’ in Internet Research: Ethics and Tradeoffs.” The authors provide a table of “Methods of Detecting and Preventing Internet Study Duplication and Fraud and Their Implications.”
- Include the step of examining your responses for signs of fraud before beginning analysis.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.
Thank you for your tips and considerations when using social media and online surveys to collect data. As an administrator at an elementary school, I often use Google Forms to reach out to students, as it ensures accessibility for all students regardless of age, academic ability, or reading level. For parental correspondence and input, we have moved to Facebook, as we were finding that parents did not want paper copies of things sent home and, most often, the information was not returned. With Facebook, we found that the majority of information made it to the parents, and our responses when gathering information increased. What I had not considered was that the data needed to be closely monitored due to the issues you mentioned in your article. These are all pieces that will need to be considered when we put out our next School Climate Survey.
As someone very new to the world of evaluation, I found this very helpful. If there are any other hints you could share, it would be much appreciated.
Hello Laura and Barbara,
Thank you for your post, I found it quite intriguing.
I am currently taking my first course related to Program Inquiry and Evaluation, and have no other background in this field. I am enjoying the course and learning so much about evaluation and its use, that I had previously never considered. Recently in class, we have considered dilemmas in evaluation use, including such topics as influence, bias and misuse. Seeing your title caught my eye!
Naively, I had never even considered the idea of fraud as one of the factors that needs to be considered during an evaluation. In this world of ever-changing social media platforms and improving technologies, do you feel it is still possible to combat the effects of fraud and still gain accurate and valuable data for evaluation? Or do you fear that fraudulent scammers and/or phishers may degrade the process and make it undesirable for participants and/or evaluators? I look forward to your insights!
Thank you very much,
Kate
Hi Laura and Barbara, thank you for sharing your experiences of data collection through social media and online surveys. As educators, we are moving to gather data this way more and more. I find that both parents and children are more apt to engage if materials are presented in an electronic format; however, I had never considered the possibility of fraudulent responses, whether to receive the incentive or to skew the data collected. From my newly acquired understanding of program evaluation, this seems to point to the importance of collecting data that is both primary and secondary in nature, as well as a balance of quantitative and qualitative data. Would reaching out to a more targeted and controlled group provide an evaluator with the same depth of data?
Hello Tom,
As a person newly informed about the area of evaluation, I found your post on the New Directions for Evaluation quite interesting. I see a parallel between this for evaluators and for me as a high school educator. Every day as educators we collect data and analyze facts about our students for the purpose of reporting. These quantitative measures, however, do not mean anything without the “insights” of the teacher. These insights are based on our background knowledge and experience when dealing with students, their context, and our prior knowledge and theory gained in our profession. How we bring out the best in our students requires this thinking part. I believe this is a similar concept to Evaluative Thinking in your profession; I am interested in your thoughts on this parallel.
Ryan Johannson