Assessing and overcoming participant dishonesty in online data collection

Cited by: 44
Author
Hydock, Chris [1]
Affiliation
[1] Georgetown Univ, Washington, DC 20057 USA
Keywords
Sampling; Qualification; MTurk; Online participants; Participant honesty; AMAZON MECHANICAL TURK; WORKERS;
DOI
10.3758/s13428-017-0984-5
Chinese Library Classification (CLC)
B841 [Research methods in psychology];
Discipline code
040201;
Abstract
Crowdsourcing services such as MTurk have opened a large pool of participants to researchers. Unfortunately, it can be difficult to confidently acquire a sample that matches a given demographic, psychographic, or behavioral dimension, both because little is known about individual participants and because some are motivated to misrepresent their identity for financial reward. Although online workers are not unusually dishonest on average, when researchers overtly request that only a certain population take part in an online study, a nontrivial portion misrepresent their identity. This study tests a proposed system that researchers can use to quickly, fairly, and easily screen participants on any dimension. In contrast to an overt request, the reported system results in significantly fewer (near-zero) instances of participant misrepresentation. Misrepresentation was tested using a large database of past participant records (45,000 unique workers). This research presents and tests an important tool for the increasingly prevalent practice of online data collection.
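The abstract does not spell out the system's mechanics, but the general approach it describes (screening without an overt eligibility request) can be illustrated as a two-stage workflow built on MTurk worker qualifications. The sketch below uses boto3's real MTurk client; the qualification label, the screening dimension, and the `responses` data are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
# A minimal sketch of covert two-stage screening on MTurk, assuming a
# boto3 MTurk client. All names and the screener logic are illustrative.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Stage 1: run a short, generic survey HIT that embeds the screening
# question among neutral filler items, so the target criterion is never
# advertised. Posting that HIT is omitted here; assume `responses` maps
# each WorkerId to whether the worker matched the criterion of interest.
responses = {"A1EXAMPLEWORKER": True, "A2EXAMPLEWORKER": False}

# Create a qualification marking workers who matched. The name and
# description stay deliberately uninformative.
qual = mturk.create_qualification_type(
    Name="Survey Group 17",  # hypothetical, intentionally generic label
    Description="Eligibility for a follow-up survey.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# Stage 2: quietly assign the qualification to matching workers only.
for worker_id, matched in responses.items():
    if matched:
        mturk.associate_qualification_with_worker(
            QualificationTypeId=qual_id,
            WorkerId=worker_id,
            IntegerValue=1,
            SendNotification=False,  # avoid signaling the criterion
        )

# The main study HIT is then restricted to holders of this qualification
# through its QualificationRequirements, so the general worker pool
# never sees an overt eligibility request it could game.
```

Because workers never learn which answer grants access to the paid follow-up study, they have no incentive to misrepresent themselves on the screening dimension, which is the contrast with an overt request that the abstract highlights.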
Pages: 1563-1567
Page count: 5