Quality control questions on Amazon's Mechanical Turk (MTurk): A randomized trial of impact on the USAUDIT, PHQ-9, and GAD-7

Cited by: 84
Authors
Agley, Jon [1]
Xiao, Yunyu [2,3]
Nolan, Rachael [4]
Golzarri-Arroyo, Lilian [5]
Affiliations
[1] Indiana Univ, Sch Publ Hlth Bloomington, Dept Appl Hlth Sci, Prevent Insights, 809 E 9th St, Bloomington, IN 47405 USA
[2] Indiana Univ, Sch Social Work, Bloomington, IN 47405 USA
[3] Indiana Univ Purdue Univ Indianapolis IUPUI, Sch Social Work, Bloomington, IN USA
[4] Univ Cincinnati, Coll Med, Dept Environm & Publ Hlth Sci, Cincinnati, OH USA
[5] Indiana Univ, Biostat Consulting Ctr, Sch Publ Hlth Bloomington, Bloomington, IN USA
Keywords
data quality; crowdsourced sampling; MTurk; reproducibility; validity; workers
DOI
10.3758/s13428-021-01665-8
CLC Classification Number
B841 [Psychological research methods]
Subject Classification Code
040201
Abstract
Crowdsourced psychological and other biobehavioral research using platforms like Amazon's Mechanical Turk (MTurk) is increasingly common - but has proliferated more rapidly than studies to establish data quality best practices. Thus, this study investigated whether outcome scores for three common screening tools would be significantly different among MTurk workers who were subject to different sets of quality control checks. We conducted a single-stage, randomized controlled trial with equal allocation to each of four study arms: Arm 1 (Control Arm), Arm 2 (Bot/VPN Check), Arm 3 (Truthfulness/Attention Check), and Arm 4 (Stringent Arm - All Checks). Data collection was completed in Qualtrics, to which participants were referred from MTurk. Subjects (n = 1100) were recruited on November 20-21, 2020. Eligible workers were required to claim U.S. residency, have a successful task completion rate > 95%, have completed a minimum of 100 tasks, and have completed a maximum of 10,000 tasks. Participants completed the US-Alcohol Use Disorders Identification Test (USAUDIT), the Patient Health Questionnaire (PHQ-9), and a screener for Generalized Anxiety Disorder (GAD-7). We found that differing quality control approaches significantly, meaningfully, and directionally affected outcome scores on each of the screening tools. Most notably, workers in Arm 1 (Control) reported higher scores than those in Arms 3 and 4 for all tools, and a higher score than workers in Arm 2 for the PHQ-9. These data suggest that the use, or lack thereof, of quality control questions in crowdsourced research may substantively affect findings, as might the types of quality control items.
Pages: 885-897
Number of pages: 13