Artificial intelligence in breast cancer screening: primary care provider preferences

Cited: 24
Authors
Hendrix, Nathaniel [1]
Hauber, Brett [1,2]
Lee, Christoph I. [3,4,5]
Bansal, Aasthaa [1]
Veenstra, David L. [1]
Affiliations
[1] Univ Washington, Comparat Hlth Outcomes Policy & Econ CHOICE Inst, Sch Pharm, 1959 NE Pacific St, Seattle, WA 98195 USA
[2] RTI Hlth Solut, Res Triangle Pk, NC USA
[3] Univ Washington, Dept Radiol, Sch Med, Seattle, WA 98195 USA
[4] Univ Washington, Dept Hlth Serv, Sch Publ Hlth, Seattle, WA 98195 USA
[5] Hutchinson Inst Canc Outcomes Res, Seattle, WA USA
Funding
US National Institutes of Health (NIH)
Keywords
artificial intelligence; breast cancer screening; discrete choice; primary care; conjoint analysis; DISCRETE-CHOICE EXPERIMENTS; MAMMOGRAPHY; PERFORMANCE; DIAGNOSIS; HEALTH; IMPLEMENTATION; TRUST; HARMS; BIAS; AI;
DOI
10.1093/jamia/ocaa292
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Background: Artificial intelligence (AI) is increasingly being proposed for use in medicine, including breast cancer screening (BCS). Little is known, however, about referring primary care providers' (PCPs') preferences for this technology.

Methods: Using qualitative interviews, we identified the attributes of AI BCS most important to ordering PCPs: sensitivity, specificity, radiologist involvement, understandability of AI decision-making, supporting evidence, and diversity of training data. We invited US-based PCPs to participate in an internet-based experiment designed to force participants to trade off among the attributes of hypothetical AI BCS products. Responses were analyzed with random parameters logit and latent class models to assess how different attributes affect the choice to recommend AI-enhanced screening.

Results: Ninety-one PCPs participated. Sensitivity was the most important attribute, and most PCPs viewed radiologist participation in mammography interpretation as important. Other important attributes were specificity, understandability of AI decision-making, and diversity of training data. We identified 3 classes of respondents: "Sensitivity First" (41%) found sensitivity to be more than twice as important as other attributes; "Against AI Autonomy" (24%) wanted radiologists to confirm every image; "Uncertain Trade-Offs" (35%) viewed most attributes as having similar importance. A majority (76%) accepted the use of AI in a "triage" role that would allow it to filter out likely negatives without radiologist confirmation.

Conclusions and Relevance: Sensitivity was the most important attribute overall, but other key attributes should be addressed to produce clinically acceptable products. We also found that most PCPs accept the use of AI to make determinations about likely negative mammograms without radiologist confirmation.
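The abstract names the estimation approach but not its mechanics. As a rough, self-contained illustration only (not the authors' code or data), the Python sketch below fits a fixed-coefficient conditional logit, the baseline model that random parameters logit and latent class models generalize, to synthetic forced-choice data; the attribute count, values, and coefficients are invented for the example.

# Minimal sketch: conditional logit on synthetic discrete-choice data.
# NOT the study's analysis; the paper used random parameters logit and
# latent class models, which extend this fixed-coefficient baseline.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic design: 500 choice tasks, 2 hypothetical AI screening products
# per task, 3 attributes (values are made up for illustration).
n_tasks, n_alts, n_attrs = 500, 2, 3
X = rng.uniform(0.0, 1.0, size=(n_tasks, n_alts, n_attrs))

true_beta = np.array([2.0, 1.0, 0.5])                 # weights used to simulate choices
utility = X @ true_beta + rng.gumbel(size=(n_tasks, n_alts))
y = utility.argmax(axis=1)                            # chosen alternative per task

def neg_log_likelihood(beta):
    """Negative log-likelihood of the conditional logit model."""
    v = X @ beta                                      # systematic utility, shape (n_tasks, n_alts)
    v = v - v.max(axis=1, keepdims=True)              # numerical stability
    probs = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(n_tasks), y]).sum()

result = minimize(neg_log_likelihood, x0=np.zeros(n_attrs), method="BFGS")
print("Estimated attribute weights:", result.x)       # should recover values near true_beta

Relative attribute importance is typically read off as ratios of the estimated coefficients. A random parameters logit additionally lets the coefficients vary across respondents according to fitted distributions, while a latent class model assigns respondents to a small number of discrete preference classes, which is how the study arrived at its "Sensitivity First", "Against AI Autonomy", and "Uncertain Trade-Offs" groups.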
Pages: 1117-1124
Page count: 8