Medical artificial intelligence for clinicians: the lost cognitive perspective

Cited by: 10
Authors
Tikhomirov, Lana [1]
Semmler, Carolyn [3]
McCradden, Melissa [1,2]
Searston, Rachel [3]
Ghassemi, Marzyeh [4,5]
Oakden-Rayner, Lauren [1]
Affiliations
[1] Univ Adelaide, Australian Inst Machine Learning, Adelaide, SA 5000, Australia
[2] Univ Toronto, Hosp Sick Children, Sch Publ Hlth, Toronto, ON, Canada
[3] Univ Adelaide, Sch Psychol, Adelaide, SA, Australia
[4] MIT, Dept Elect Engn & Comp Sci, Cambridge, MA 02139 USA
[5] MIT, Inst Med & Evaluat Sci, Cambridge, MA USA
Keywords
Decision-making; Automation
DOI
10.1016/S2589-7500(24)00095-5
Chinese Library Classification: R-058
Abstract
The development and commercialisation of medical decision systems based on artificial intelligence (AI) far outpace our understanding of their value for clinicians. Although our argument applies across many areas of medicine, we focus on characterising the diagnostic decisions of radiologists through the concept of ecologically bounded reasoning, review the differences between clinician decision making and medical AI model decision making, and reveal how these differences pose fundamental challenges for integrating AI into radiology. We argue that clinicians are contextually motivated, mentally resourceful decision makers, whereas AI models are contextually stripped, correlational decision makers, and we discuss misconceptions about clinician-AI interaction that stem from this misalignment of capabilities. We outline how future research on clinician-AI interaction could better address the cognitive considerations of decision making and be used to enhance the safety and usability of AI models in high-risk medical decision-making contexts.
Pages: e589-e594 (6 pages)