
Tarek Azzam on Using Crowdsourcing in Evaluation Practice

I am Tarek Azzam, assistant professor at Claremont Graduate University and associate director of the Claremont Evaluation Center.

Today I want to talk about crowdsourcing and how it can potentially be used in evaluation practice. Generally speaking, crowdsourcing is the process of using the power of many individuals (i.e., the crowd) to accomplish specific tasks. The idea has been around for a long time (e.g., the creation of the Oxford English Dictionary), but recent developments in technology have made the power of the crowd much easier to access.

I will focus on just one crowdsourcing website, Amazon’s Mechanical Turk (MTurk), because it is the most widely known, used, and studied crowdsourcing site. The site facilitates interactions between “requesters” and “workers.” A requester describes a task (e.g., please complete a survey), sets the payment and the time allotted for completing the task, and determines the qualifications workers need in order to take it on. This information is then posted on the MTurk website, and interested individuals who qualify can complete the task for the promised payment.
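For readers who want to see what the requester side looks like in code, here is a minimal sketch using Amazon’s MTurk API through the Python boto3 library (my assumption; everything can also be done through the MTurk web interface, and the title, reward, and survey URL below are hypothetical placeholders):

# Minimal sketch: posting a survey task ("HIT") as an MTurk requester via boto3.
# The title, reward, and survey URL are hypothetical placeholders.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# An ExternalQuestion points workers to a survey hosted elsewhere
# (e.g., a web survey tool) and embeds it in the MTurk worker interface.
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/my-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Complete a 15-minute survey",           # the task description workers see
    Description="Answer questions about your program experiences.",
    Keywords="survey, research, evaluation",
    Reward="0.55",                                 # payment per completed assignment, in US dollars
    MaxAssignments=500,                            # how many workers can complete the task
    AssignmentDurationInSeconds=30 * 60,           # time allotted once a worker accepts
    LifetimeInSeconds=3 * 24 * 60 * 60,            # how long the task stays posted
    Question=external_question,
)
print("Posted HIT:", hit["HIT"]["HITId"])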

This facilitated marketplace has some really interesting implications for evaluation practice. For example, evaluators can use MTurk to establish the validity and reliability of survey instruments before giving them to the intended participants. By posting a survey on MTurk and collecting responses from individuals whose background characteristics are similar to those of the intended participants, an evaluator can estimate the reliability of a measure, get feedback on the items, and, if needed, translate the items into another language. All of this can be accomplished in a matter of days. Personally, I have been able to collect 500 responses to a 15-minute survey, at a cost of 55 cents per completed survey, in less than three days.
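To make the reliability step concrete, here is a rough sketch (not from the original post) of computing Cronbach’s alpha on the pilot responses, assuming they have been downloaded into a CSV with one row per respondent and one column per item; the file and column names are hypothetical:

# Sketch: estimating internal-consistency reliability (Cronbach's alpha)
# for a multi-item scale piloted on MTurk.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = pd.read_csv("mturk_pilot_responses.csv")   # hypothetical export of the pilot data
scale_items = responses[["item1", "item2", "item3", "item4", "item5"]]
print(f"Cronbach's alpha: {cronbach_alpha(scale_items):.2f}")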

Hot Tip: When setting the eligibility criteria for MTurk participants, choose workers with approval ratings of 95% or higher.
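If you are working through the API rather than the web interface, that filter can be expressed as a qualification requirement on the task. A small sketch, continuing the earlier example and using MTurk’s built-in approval-rate qualification:

# Sketch: restricting a HIT to workers whose approval rating is 95% or higher,
# using MTurk's built-in "PercentAssignmentsApproved" qualification.
approval_rate_requirement = {
    "QualificationTypeId": "000000000000000000L0",   # built-in approval-rate qualification
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [95],
    "ActionsGuarded": "Accept",                      # workers below the threshold cannot accept the HIT
}

# Pass it when creating the HIT (continuing the earlier sketch):
# mturk.create_hit(..., QualificationRequirements=[approval_rate_requirement])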

There are other uses that I am currently experimenting with. For example:

  • Can MTurk respondents be used to create a matched comparison group in evaluation studies?
  • Is it possible to use MTurk respondents in a matched group pre-post design?
  • Is it possible to use MTurk to help with the analysis and coding of qualitative data?

The answers to these questions are not yet known, but I will keep you updated as we continue to explore the limits of crowdsourcing in evaluation practice.
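To give a sense of what the first question involves, here is a purely illustrative sketch, not a method the post endorses, of one way a matched comparison group might be built: match each program participant to the MTurk respondent who is closest on a few background covariates (the file and column names are hypothetical):

# Illustrative sketch only: nearest-neighbor matching of program participants
# to MTurk respondents on background covariates.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

covariates = ["age", "education_years", "income"]

participants = pd.read_csv("program_participants.csv")   # the evaluation sample
mturk_pool = pd.read_csv("mturk_respondents.csv")        # candidate comparison pool

# Standardize covariates so no single variable dominates the distance metric.
scaler = StandardScaler().fit(mturk_pool[covariates])
pool_scaled = scaler.transform(mturk_pool[covariates])
participants_scaled = scaler.transform(participants[covariates])

# For each participant, find the single closest MTurk respondent.
nn = NearestNeighbors(n_neighbors=1).fit(pool_scaled)
_, indices = nn.kneighbors(participants_scaled)
matched_comparison = mturk_pool.iloc[indices.ravel()].reset_index(drop=True)

print(matched_comparison.head())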

Hot Tip: I will be presenting a Coffee Break Demonstration (free for American Evaluation Association (AEA) members) on crowdsourcing on Thursday, April 18, 2013, from 2:00 to 2:20 p.m. EDT. Hope to see you there.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.


1 thought on “Tarek Azzam on Using Crowdsourcing in Evaluation Practice”

  1. Thanks Tarek, for sharing your good ideas and the creative ways you’re using the site. My first question was “how reliable would folks be as survey testers?” but the approval ratings you mention would seem to cut the risk substantially. Cheers, jj
