Assessing and Improving Data Integrity in Web-Based Surveys: Comparison of Fraud Detection Systems in a COVID-19 Study

Times cited: 15
Authors
Bonett, Stephen [1 ,3 ]
Lin, Willey [1 ]
Topper, Patrina Sexton [1 ]
Wolfe, James [1 ]
Golinkoff, Jesse [1 ]
Deshpande, Aayushi [2 ]
Villarruel, Antonia [1 ]
Bauermeister, Jose [1 ]
Affiliations
[1] Univ Penn, Sch Nursing, Philadelphia, PA USA
[2] Ashoka Univ, Dept Psychol, Sonepat, India
[3] Univ Penn, Sch Nursing, 418 Curie Blvd, Philadelphia, PA 19104 USA
Funding
US National Institutes of Health
Keywords
web-based survey; data quality; fraud; survey methodology; COVID-19; survey; fraud detection; Philadelphia; data privacy; data protection; privacy; security; data; information security; data validation; cross-sectional; web-based; online
DOI
10.2196/47091
Chinese Library Classification
R19 [Health care organization and administration (health services management)]
Abstract
Background: Web-based surveys increase access to study participation and improve opportunities to reach diverse populations. However, web-based surveys are vulnerable to data quality threats, including fraudulent entries from automated bots and duplicative submissions. Widely used proprietary fraud detection tools offer little transparency about their methods, their effectiveness, or the representativeness of the resulting data sets. Robust, reproducible, and context-specific methods of accurately detecting fraudulent responses are needed to ensure integrity and maximize the value of web-based survey research.

Objective: This study aims to describe a multilayered fraud detection system implemented in a large web-based survey about COVID-19 attitudes, beliefs, and behaviors; examine the agreement between this fraud detection system and a proprietary fraud detection system; and compare the resulting study samples from each of the 2 fraud detection methods.

Methods: The PhillyCEAL Common Survey is a cross-sectional web-based survey that remotely enrolled residents aged 13 years or older to assess how the COVID-19 pandemic impacted individuals, neighborhoods, and communities in Philadelphia, Pennsylvania. Two fraud detection methods are described and compared: (1) a multilayer fraud detection strategy developed by the research team that combined automated validation of response data with real-time verification of study entries by study personnel and (2) the proprietary fraud detection system used by the Qualtrics survey platform. Descriptive statistics were computed for the full sample and for responses classified as valid by each of the 2 fraud detection methods, and classification tables were created to assess agreement between the methods. The impact of fraud detection methods on the distribution of vaccine confidence by racial or ethnic group was assessed.
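This record does not enumerate the individual validation layers the research team used. As a minimal sketch of the kind of automated screening such a multilayer system might combine with human review, the function below flags a response for follow-up based on a few common heuristics; all field names (`ip`, `duration_s`, `honeypot`, `zip_code`) and thresholds are hypothetical, not taken from the paper.

```python
def flag_response(resp, seen_ips, min_seconds=60):
    """Return a list of reasons to route a survey response to human review.

    `resp` is a dict with hypothetical fields:
      'ip'         - submitting IP address
      'duration_s' - completion time in seconds
      'honeypot'   - a form field hidden from human users (bots often fill it)
      'zip_code'   - self-reported ZIP code
    """
    flags = []
    if resp["ip"] in seen_ips:                 # duplicate submission check
        flags.append("duplicate_ip")
    if resp["duration_s"] < min_seconds:       # implausibly fast completion
        flags.append("speeder")
    if resp["honeypot"]:                       # hidden field was filled in
        flags.append("bot_honeypot")
    if not resp["zip_code"].startswith("191"): # Philadelphia ZIP codes start 191xx
        flags.append("out_of_area")
    seen_ips.add(resp["ip"])
    return flags
```

Responses that accumulate flags would go to study personnel for verification rather than being discarded automatically, matching the conservative, human-in-the-loop approach the abstract describes.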
Results: Of 7950 completed surveys, our multilayer fraud detection system identified 3228 (40.60%) cases as valid, while the Qualtrics fraud detection system identified 4389 (55.21%) cases as valid. The 2 methods showed only "fair" or "minimal" agreement in their classifications (kappa=0.25; 95% CI 0.23-0.27). The choice of fraud detection method impacted the distribution of vaccine confidence by racial or ethnic group.

Conclusions: The selection of a fraud detection method can affect the study's sample composition. The findings of this study, while not conclusive, suggest that a multilayered approach to fraud detection, one that uses automated fraud detection conservatively and integrates human review of entries tailored to the study's specific context and its participants, may be warranted for future survey research.
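The agreement statistic reported above is Cohen's kappa, which corrects observed agreement between two classifiers for the agreement expected by chance. A minimal self-contained sketch of the computation follows; the valid/fraudulent labels below are invented toy data, not the study's classifications.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance from the marginals.
    """
    n = len(rater_a)
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    if p_e == 1:           # degenerate case: both raters use a single label
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical valid (1) / fraudulent (0) calls from two detection systems
multilayer = [1, 1, 0, 0, 1, 0, 1, 0]
proprietary = [1, 0, 0, 1, 1, 0, 1, 1]
print(round(cohen_kappa(multilayer, proprietary), 2))  # → 0.25
```

A kappa of 0.25 falls in the range McHugh (2012) labels "minimal" agreement, which is why two systems can each look reasonable in isolation yet admit substantially different samples.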
Pages: 14