Participant Carelessness and Fraud: Consequences for Clinical Research and Potential Solutions

Citations: 80
Authors
Chandler, Jesse [1 ,2 ]
Sisso, Itay [3 ]
Shapiro, Danielle [4 ]
Affiliations
[1] Univ Michigan, Inst Social Res, Ann Arbor, MI USA
[2] Mathematica Policy Res, 220 East Huron St,Suite 300, Ann Arbor, MI 48103 USA
[3] Hebrew Univ Jerusalem, Federmann Ctr Study Rat, Jerusalem, Israel
[4] Univ Michigan, Michigan Med, Dept Phys Med & Rehabil, Ann Arbor, MI 48109 USA
Keywords
MTurk; data quality; fraud; Mechanical Turk; validity; inventory; responses; misrepresentation; reliability; workers; version; online
DOI
10.1037/abn0000479
CLC classification
B849 [Applied Psychology]
Discipline code
040203
Abstract
Clinical psychological research studies often require individuals with specific characteristics. The Internet can be used to recruit broadly, enabling the recruitment of rare groups such as people with specific psychological disorders. However, Internet-based research relies on participant self-report to determine eligibility, and thus data quality depends on participant honesty. For such rare groups, even low levels of participant dishonesty can lead to a substantial proportion of fraudulent survey responses, and all studies will include careless respondents who do not pay attention to questions, do not understand them, or provide intentionally wrong responses. Poor-quality responses should be thought of as categorically different from high-quality responses. Including them will lead to overestimation of the prevalence of rare groups and to incorrect estimates of scale reliability, means, and correlations between constructs. We demonstrate that, for these reasons, including poor-quality responses (which are usually positively skewed) will lead to several data-quality problems, including spurious associations between measures. We provide recommendations for ensuring that fraudulent participants are detected and excluded from self-report research studies.
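The abstract's claim that careless responding can manufacture spurious associations is easy to illustrate with a small simulation (a hypothetical sketch, not the paper's own analysis; all parameter values below are assumptions). Genuine respondents score near the floor on two unrelated, positively skewed symptom scales, while careless respondents answer both scales at random, pulling both measures upward together:

```python
import numpy as np

# Hypothetical parameters chosen for illustration only.
rng = np.random.default_rng(0)
n_genuine, n_careless = 900, 100

# Genuine respondents: two unrelated symptom scales, both floored near
# the low end (positively skewed), as is typical for rare disorders.
a_gen = rng.exponential(scale=0.8, size=n_genuine)
b_gen = rng.exponential(scale=0.8, size=n_genuine)

# Careless respondents: answer both scales uniformly at random,
# so they land well above the genuine floor on both measures at once.
a_car = rng.uniform(0, 5, size=n_careless)
b_car = rng.uniform(0, 5, size=n_careless)

r_clean = np.corrcoef(a_gen, b_gen)[0, 1]
r_mixed = np.corrcoef(np.concatenate([a_gen, a_car]),
                      np.concatenate([b_gen, b_car]))[0, 1]

print(f"genuine respondents only: r = {r_clean:.2f}")  # near zero
print(f"with 10% careless mixed in: r = {r_mixed:.2f}")  # inflated positive
```

Even though the two constructs are independent among genuine respondents, the 10% careless subsample sits high on both scales simultaneously and drags the pooled correlation well above zero, which is the mechanism the abstract describes.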
Pages: 49-55
Page count: 7