Hi! This is Miriam Jacobson, a Doctoral Student, and Tarek Azzam, an Associate Professor, at Claremont Graduate University. We are talking today about how to use online crowdsourcing to conduct Research on Evaluation (RoE). Online crowdsourcing is the process of recruiting people online to do specific tasks, such as completing a survey, categorizing information, or translating text. We are currently exploring how access to the “crowd” can contribute to the development of new methods and approaches to evaluation.
Lessons Learned:
- Crowdsourcing allows you to quickly and inexpensively recruit participants for RoE. For example, using Amazon’s Mechanical Turk (MTurk), one of the largest crowdsourcing services, you can post a survey and receive responses from hundreds of participants within a few days, for roughly $0.50 to $2.00 per completed survey (see the sketch after this list for posting a task programmatically).
- Crowdsourcing also allows you to engage populations that are otherwise difficult to access for RoE studies, such as public constituents.
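If you prefer to script that posting step rather than use the MTurk web interface, a minimal sketch with Python’s boto3 MTurk client might look like the following. The survey URL, title, reward, and other parameter values are placeholders rather than recommendations, and the sandbox endpoint is used so nothing is charged while you test.

```python
# A minimal sketch of posting a survey HIT to MTurk with boto3 (Python).
# The survey URL and all parameter values below are placeholders.
import boto3

# Point at the requester sandbox so you can test without paying workers;
# remove endpoint_url (or use the production endpoint) to post for real.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# MTurk's ExternalQuestion wraps an externally hosted survey in an iframe.
survey_url = "https://example.com/my-roe-survey"  # placeholder survey link
external_question = f"""
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>{survey_url}</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

response = mturk.create_hit(
    Title="10-minute survey about evaluation reports",
    Description="Answer questions about a short evaluation report.",
    Keywords="survey, research, evaluation",
    Reward="1.00",                       # payment per completed survey, in USD
    MaxAssignments=200,                  # how many workers may complete it
    LifetimeInSeconds=3 * 24 * 60 * 60,  # keep the posting up for three days
    AssignmentDurationInSeconds=30 * 60, # time allowed per worker
    Question=external_question,
)
print("HIT ID:", response["HIT"]["HITId"])
```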
Hot Tips:
- Consider whether your research is a good fit with participants on MTurk (commonly called “MTurkers”), who tend to be younger and more educated than the general public. To further understand who is participating in your study, remember to ask about relevant individual characteristics.
- When recruiting participants, be clear about what the task involves, the time required to complete it, and if applicable, any inclusion criteria for participants.
- Make sure instructions are clear for a range of people; if you aren’t sure, pilot test the instructions first.
- Treat MTurkers fairly: respond to email questions and pay people promptly for completing tasks.
- To increase quality, you can restrict who completes your task by setting specific criteria, such as a minimum approval rating of 95% (the percentage of a worker’s previous submissions that requesters approved), successful completion of 500 or more tasks, and geographic location (currently you can select only countries and US states).
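These criteria correspond to MTurk’s built-in system qualification types. As a rough illustration, assuming the same boto3 client as in the earlier sketch, the requirements from the last tip could be expressed like this:

```python
# A sketch of qualification requirements matching the criteria above:
# approval rating of at least 95%, 500+ approved tasks, and US-based workers.
qualification_requirements = [
    {   # Worker_PercentAssignmentsApproved (system qualification)
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
    {   # Worker_NumberHITsApproved (system qualification)
        "QualificationTypeId": "00000000000000000040",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [500],
    },
    {   # Worker_Locale: restrict by country (and optionally US state)
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "In",
        "LocaleValues": [{"Country": "US"}],
    },
]

# Pass the list to create_hit alongside the other parameters shown earlier:
# mturk.create_hit(..., QualificationRequirements=qualification_requirements)
```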
Cool Tricks:
- Use crowdsourcing to:
- Pilot test a survey before administering it to a non-crowdsourced population (e.g., evaluation stakeholders).
- Study the effectiveness of different types of report language or data presentation formats. For example, you can post multiple versions of a report and see which best communicates the intended information.
- Involve MTurkers in operationalizing evaluation-related concepts so that they are understandable and relevant to a broad range of people.
- Engage large groups of people to code qualitative RoE data (e.g., open-ended survey responses, documents, or videos) so you can quickly classify information and get an outside perspective on the data.
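If you do send coding tasks to the crowd, you will eventually need to collect the submitted work and pay coders promptly (per the hot tips above). A rough sketch, again assuming the boto3 client from the first example and a hypothetical HIT ID:

```python
# A sketch of collecting submitted work for a HIT and approving it promptly.
# hit_id is hypothetical; reuse the boto3 "mturk" client from the earlier sketch.
import xml.etree.ElementTree as ET

hit_id = "EXAMPLE_HIT_ID"
result = mturk.list_assignments_for_hit(
    HITId=hit_id,
    AssignmentStatuses=["Submitted"],
    MaxResults=100,
)

for assignment in result["Assignments"]:
    # Each worker's answers come back as a QuestionFormAnswers XML document.
    answers = ET.fromstring(assignment["Answer"])
    ns = "{http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionFormAnswers.xsd}"
    for ans in answers.iter(f"{ns}Answer"):
        question_id = ans.find(f"{ns}QuestionIdentifier").text
        free_text = ans.findtext(f"{ns}FreeText")
        print(assignment["WorkerId"], question_id, free_text)

    # Pay workers promptly once their coding has been reviewed.
    mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
```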
The American Evaluation Association is celebrating Research on Evaluation (ROE) Topical Interest Group Week. The contributions all this week to aea365 come from our ROE TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.