Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches

Cited: 15
Authors
Kuang, Jinqiu [1 ]
Argo, Lauren [1 ]
Stoddard, Greg [2 ]
Bray, Bruce E. [1 ]
Zeng-Treitler, Qing [1 ,3 ]
Affiliations
[1] Univ Utah, Dept Biomed Informat, Salt Lake City, UT 84108 USA
[2] Univ Utah, Study Design & Biostat Ctr, Salt Lake City, UT 84108 USA
[3] George E Wahlen Dept Vet Affairs Med Ctr, Informat Decis Enhancement & Analyt Sci IDEAS Ctr, Salt Lake City, UT USA
Keywords
crowdsourcing; patient discharge summaries; Amazon Mechanical Turk; pictograph recognition; cardiovascular; DISCHARGE INSTRUCTIONS; SIMPLIFICATION; INFORMATION; EMERGENCY; TEXT;
DOI
10.2196/jmir.4582
CLC Number
R19 [Health care organization and services (health services administration)]
Subject Classification Number
Abstract
Background: Compared to traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk), a large and well-known crowdsourcing service, has developed into the leading platform for crowdsourced recruitment. Objective: To explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs. Methods: A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. This set of illustrations (n=486) was first tested through an in-person survey in a hospital setting (n=150) and then through an online survey of MTurk participants (n=150). We analyzed these survey results to determine their comparability. Results: Both the demographics and the pictograph recognition rates of the online participants differed from those of the in-person participants. In the multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct). Conclusions: The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but it cannot replace them.
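The adjusted group comparison reported in the Results can be illustrated with an ordinary least squares model that regresses each participant's mean recognition rating on a group indicator plus demographic covariates. The sketch below is a minimal, hypothetical reconstruction, not the authors' analysis code: the column names (group, age, education_years, rating) and the simulated data are assumptions made only to show the modeling pattern.

```python
# Minimal sketch of an adjusted group comparison via multivariable linear
# regression (OLS). Column names and simulated data are hypothetical; the
# study's actual covariates and data are not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 150  # participants per group, as in the study

df = pd.DataFrame({
    # 0 = in-person hospital sample, 1 = online MTurk sample
    "group": np.repeat([0, 1], n),
    "age": rng.integers(18, 80, size=2 * n),
    "education_years": rng.integers(8, 20, size=2 * n),
    # mean pictograph recognition rating on the 4-point Likert scale
    "rating": np.clip(
        rng.normal(3.0, 0.4, size=2 * n) + 0.19 * np.repeat([0, 1], n), 1, 4
    ),
})

# Rating regressed on group membership, adjusted for demographic covariates
model = smf.ols("rating ~ group + age + education_years", data=df).fit()

print(model.params["group"])          # adjusted mean difference between groups
print(model.conf_int().loc["group"])  # 95% CI for the group effect
print(model.pvalues["group"])         # P value for the group effect
```

Adjusted group means like the 2.95 versus 3.14 quoted in the abstract would correspond to such a model's predictions for each group evaluated at the covariate means.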
Pages: 12