Artificial intelligence in breast cancer screening: primary care provider preferences

Cited by: 24
Authors
Hendrix, Nathaniel [1]
Hauber, Brett [1,2]
Lee, Christoph I. [3,4,5]
Bansal, Aasthaa [1]
Veenstra, David L. [1]
Affiliations
[1] Univ Washington, Comparat Hlth Outcomes Policy & Econ CHOICE Inst, Sch Pharm, 1959 NE Pacific St, Seattle, WA 98195 USA
[2] RTI Hlth Solut, Res Triangle Pk, NC USA
[3] Univ Washington, Dept Radiol, Sch Med, Seattle, WA 98195 USA
[4] Univ Washington, Dept Hlth Serv, Sch Publ Hlth, Seattle, WA 98195 USA
[5] Hutchinson Inst Canc Outcomes Res, Seattle, WA USA
Funding
US National Institutes of Health;
Keywords
artificial intelligence; breast cancer screening; discrete choice; primary care; conjoint analysis; DISCRETE-CHOICE EXPERIMENTS; MAMMOGRAPHY; PERFORMANCE; DIAGNOSIS; HEALTH; IMPLEMENTATION; TRUST; HARMS; BIAS; AI;
DOI
10.1093/jamia/ocaa292
Chinese Library Classification
TP [automation technology, computer technology];
Discipline code
0812;
Abstract
Background: Artificial intelligence (AI) is increasingly being proposed for use in medicine, including breast cancer screening (BCS). Little is known, however, about referring primary care providers' (PCPs') preferences for this technology. Methods: We identified the most important attributes of AI BCS for ordering PCPs using qualitative interviews: sensitivity, specificity, radiologist involvement, understandability of AI decision-making, supporting evidence, and diversity of training data. We invited US-based PCPs to participate in an internet-based experiment designed to force participants to trade off among the attributes of hypothetical AI BCS products. Responses were analyzed with random parameters logit and latent class models to assess how different attributes affect the choice to recommend AI-enhanced screening. Results: Ninety-one PCPs participated. Sensitivity was most important, and most PCPs viewed radiologist participation in mammography interpretation as important. Other important attributes were specificity, understandability of AI decision-making, and diversity of data. We identified 3 classes of respondents: "Sensitivity First" (41%) found sensitivity to be more than twice as important as other attributes; "Against AI Autonomy" (24%) wanted radiologists to confirm every image; "Uncertain Trade-Offs" (35%) viewed most attributes as having similar importance. A majority (76%) accepted the use of AI in a "triage" role that would allow it to filter out likely negatives without radiologist confirmation. Conclusions and Relevance: Sensitivity was the most important attribute overall, but other key attributes should be addressed to produce clinically acceptable products. We also found that most PCPs accept the use of AI to make determinations about likely negative mammograms without radiologist confirmation.
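As a rough illustration of the choice-modeling approach named in the abstract, the following minimal Python sketch fits a basic conditional logit to simulated discrete-choice data by maximum likelihood. It is not the authors' analysis (which used random parameters logit and latent class models), and the respondent count, task count, attribute count, and "true" preference weights below are hypothetical assumptions chosen only for the example.

# Illustrative sketch only: conditional logit on simulated choice data.
# All sizes and weights are assumptions, not values from the study.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n_choice_sets = 91 * 8                 # e.g., 91 respondents x 8 tasks (assumed)
n_alternatives, n_attributes = 2, 4    # two hypothetical AI products, four attributes

# Hypothetical attribute levels (e.g., sensitivity, specificity, ...) per alternative
X = rng.uniform(0.0, 1.0, size=(n_choice_sets, n_alternatives, n_attributes))

beta_true = np.array([2.0, 1.0, 0.8, 0.5])   # assumed preference weights
utility = X @ beta_true                       # systematic utility of each alternative
p = np.exp(utility) / np.exp(utility).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(n_alternatives, p=row) for row in p])

def neg_log_likelihood(beta):
    # Multinomial logit choice probabilities; return negative log-likelihood
    v = X @ beta
    log_prob = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n_choice_sets), choice].sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(n_attributes), method="BFGS")
print("Estimated attribute weights:", np.round(fit.x, 2))

In a random parameters (mixed) logit, the weights would instead be drawn from a distribution across respondents, and a latent class model would estimate separate weight vectors for discrete respondent classes such as the three described in the abstract.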
Pages: 1117-1124
Page count: 8