Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches

Cited by: 15
Authors
Kuang, Jinqiu [1 ]
Argo, Lauren [1 ]
Stoddard, Greg [2 ]
Bray, Bruce E. [1 ]
Zeng-Treitler, Qing [1 ,3 ]
Affiliations
[1] Univ Utah, Dept Biomed Informat, Salt Lake City, UT 84108 USA
[2] Univ Utah, Study Design & Biostat Ctr, Salt Lake City, UT 84108 USA
[3] George E Wahlen Dept Vet Affairs Med Ctr, Informat Decis Enhancement & Analyt Sci IDEAS Ctr, Salt Lake City, UT USA
Keywords
crowdsourcing; patient discharge summaries; Amazon Mechanical Turk; pictograph recognition; cardiovascular; DISCHARGE INSTRUCTIONS; SIMPLIFICATION; INFORMATION; EMERGENCY; TEXT
DOI
10.2196/jmir.4582
CLC number
R19 [Health care organization and services (health services management)]
Abstract
Background: Compared to traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk), a large and well-known crowdsourcing service, has developed into the leading platform for crowdsourcing recruitment.
Objective: To explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs.
Methods: A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. The illustrations (n=486) were first tested through an in-person survey in a hospital setting (n=150 participants) and then with online MTurk participants (n=150). We analyzed the two sets of survey results to determine their comparability.
Results: Both the demographics and the pictograph recognition rates of the online participants differed from those of the in-person participants. In a multivariable linear regression model comparing the two groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). On a 4-point Likert scale (1=totally incorrect, 4=totally correct), the adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample.
Conclusions: The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but not a replacement for them.
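For readers who want to see how an adjusted group comparison of this kind can be set up, the sketch below fits an ordinary least squares model of the 1-4 recognition rating on group membership plus demographic covariates, using Python and statsmodels. The file name and column names (rating, group, age, gender, education) are illustrative assumptions, not the authors' actual variables or code.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per pictograph rating, with the 1-4 recognition
# score, the survey group (hospital vs. mturk), and demographic covariates.
df = pd.read_csv("pictograph_ratings.csv")  # assumed file; not provided by the authors

# OLS with the in-person hospital sample as the reference level, so the
# coefficient on the group term is the adjusted mean difference for MTurk.
model = smf.ols(
    "rating ~ C(group, Treatment(reference='hospital'))"
    " + age + C(gender) + C(education)",
    data=df,
).fit()

print(model.summary())  # coefficient table with 95% CIs and P values

In a model of this shape, the estimated coefficient on the group term and its confidence interval would play the role of the adjusted mean difference reported in the Results.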
Pages: 12