SIOP Week: Elizabeth Rupprecht on Using mTurk to Collect Evaluation Data

Hello! I’m Elizabeth Rupprecht, an Industrial-Organizational Psychology graduate student at Saint Louis University. I would like to tell you about a great resource for collecting national or international evaluation data: Amazon’s Mechanical Turk (mTurk).

mTurk is normally used to provide organizations with assistance completing tasks. Typically, an organization sets up a “task,” such as transcribing one minute of audio, and posts it on mTurk for any interested “workers” to complete. After the organization reviews the work, the worker is paid between one cent and a dollar, depending on the complexity and length of the task.

In a recent article, researchers noted that mTurk provides I/O psychologists with a large and diverse sample of working adults from across the country for research on topics such as crowdsourcing, decision-making, and leadership (Buhrmester et al., 2011). mTurk could also be useful for evaluations needing sizable and diverse samples. For example, in policy analysis, mTurk could be used to read the pulse of American voters on specific governmental policies. For consumer-oriented evaluation, it could help researchers obtain a convenient, diverse, and large sample of consumers to assess products or services.
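
To make that requester workflow concrete, here is a minimal sketch of posting a task programmatically. It assumes access to mTurk’s API through the boto3 Python SDK and valid AWS credentials; the task title, reward, and question text are hypothetical placeholders, and the sandbox endpoint lets you experiment without paying real workers.

```python
import boto3

# Connect to the mTurk requester sandbox (swap in the production endpoint
# https://mturk-requester.us-east-1.amazonaws.com to post real tasks).
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An inline question that mTurk renders itself (HTMLQuestion schema).
# A real HIT would use JavaScript to copy the assignmentId URL parameter
# into the hidden field before the form is submitted.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <form action="https://www.mturk.com/mturk/externalSubmit" method="post">
        <input type="hidden" name="assignmentId" value="">
        <p>Listen to the one-minute clip (link omitted here) and type what you hear.</p>
        <textarea name="transcript" rows="8" cols="60"></textarea>
        <input type="submit" value="Submit">
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Transcribe one minute of audio",     # hypothetical task
    Description="Listen to a short clip and type what you hear.",
    Keywords="transcription, audio",
    Reward="0.25",                              # in dollars, passed as a string
    MaxAssignments=50,                          # how many workers may complete it
    LifetimeInSeconds=7 * 24 * 3600,            # how long the task stays posted
    AssignmentDurationInSeconds=600,            # time allowed per worker
    Question=question_xml,
)
print("HIT posted:", hit["HIT"]["HITId"])
```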

Rad Resource: Even though mTurk may seem too good to be true, research published in Judgment and Decision Making has found that participants on mTurk are representative of the US population of Internet users. In addition, 70-80% of users are from the US (Paolacci et al., 2010).
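
If your evaluation depends on that US-heavy pool, you can enforce it rather than leave it to chance. Here is a sketch of restricting a task to US-based workers using mTurk’s built-in Locale system qualification, assuming the same boto3 client as above:

```python
# Restrict a HIT to workers whose mTurk locale is the United States,
# using mTurk's built-in Locale system qualification.
us_only = [{
    "QualificationTypeId": "00000000000000000071",  # system Locale qualification
    "Comparator": "EqualTo",
    "LocaleValues": [{"Country": "US"}],
}]

# Passed alongside the other arguments from the earlier sketch:
# mturk.create_hit(..., QualificationRequirements=us_only)
```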

Cool Trick: mTurk has its own survey tools, but it also lets you link to an external assessment tool, which speeds things up and adds advanced functionality, such as exporting directly into third-party statistics programs (SPSS, SAS, Excel, etc.).
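
A minimal sketch of that trick, assuming the boto3 client from the earlier example; the survey URL is a hypothetical placeholder for whatever external assessment tool you use:

```python
# Pass an ExternalQuestion instead of an inline question, so the HIT frames
# an external survey tool. The URL is a hypothetical placeholder; the external
# page is responsible for posting the finished assignment back to
# https://www.mturk.com/mturk/externalSubmit with the worker's assignmentId.
external_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-evaluation-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

# Used exactly like the inline question in the earlier sketch:
# mturk.create_hit(..., Question=external_xml)
```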

Hot Tip: As my colleague Lacie Barber discussed in her aea365 contribution, implementing quality-control checks in surveys can improve the quality of your data. In my experience with mTurk, you need to specify your target population both in the mTurk advertisement/recruitment statement for the “workers” and in the actual survey. Weeding out participants who overlook your specifications in the advertisement is vital! If the “workers” do not follow your specifications, or do not complete their “task” (i.e., your survey), you do not need to pay them.
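
Here is a sketch of that review-and-pay step through the API, assuming the boto3 client and HIT from the earlier examples; the attention_check answer field is a hypothetical quality-control item embedded in the survey:

```python
import xml.etree.ElementTree as ET

# Review submitted work and pay only workers who followed the specifications.
# "attention_check" is a hypothetical screening item embedded in the survey.
NS = "{http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionFormAnswers.xsd}"

resp = mturk.list_assignments_for_hit(
    HITId=hit["HIT"]["HITId"],
    AssignmentStatuses=["Submitted"],
)

for assignment in resp["Assignments"]:
    # Each submission arrives as QuestionFormAnswers XML.
    answers = {
        a.findtext(NS + "QuestionIdentifier"): a.findtext(NS + "FreeText")
        for a in ET.fromstring(assignment["Answer"]).iter(NS + "Answer")
    }
    if answers.get("attention_check") == "agree":  # hypothetical pass criterion
        mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
    else:
        # Rejected work is not paid; feedback is required by the API.
        mturk.reject_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="The task specifications were not followed.",
        )
```

Keep in mind that rejections are visible to workers and affect their approval ratings, so pairing them with a clear recruitment statement, as described above, keeps things fair.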

Only time will tell whether mTurk becomes a widely used engine for social science and evaluation research, but at the moment it looks like the hot new convenience sample!

Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3-5.

Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411-419.

The American Evaluation Association is celebrating Society for Industrial & Organizational Psychology (SIOP) Week with our SIOP colleagues. All of this week’s contributions to aea365 come from our SIOP members, and you may wish to consider subscribing to our weekly headlines and resources list, where we’ll be highlighting SIOP resources. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice.
