Augmenting Medical Diagnosis Decisions? An Investigation into Physicians' Decision-Making Process with Artificial Intelligence

Cited by: 153
Authors
Jussupow, Ekaterina [1 ]
Spohrer, Kai [1 ]
Heinzl, Armin [1 ]
Gawlitza, Joshua [2 ]
Affiliations
[1] Univ Mannheim, Business Sch, Chair Gen Management & Informat Syst, Area Informat Syst, D-68161 Mannheim, Germany
[2] Tech Univ Munich, Univ Hosp Rechts Isar, Inst Diagnost & Intervent Radiol, Thorac Imaging, D-81675 Munich, Germany
Keywords
decision making; artificial intelligence; decision support; metacognition; healthcare; dual process; advice taking; SUPPORT-SYSTEMS; PRODUCT RECOMMENDATIONS; E-COMMERCE; AUTOMATION; ALGORITHMS; CONTRIBUTE; JUDGMENT; ADVICE; AGENTS; RISE
DOI
10.1287/isre.2020.0980
Chinese Library Classification (CLC)
G25 [Library Science, Librarianship]; G35 [Information Science, Information Work]
Subject Classification Code
1205; 120501
Abstract
Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Much research currently aims to improve AI technologies and debates their societal implications. Surprisingly little effort is spent on understanding the cognitive challenges of decision augmentation with AI-based systems, although these systems make it more difficult for decision makers to evaluate the correctness of system advice and to decide whether to reject or accept it. As little is known about the cognitive mechanisms that underlie such evaluations, we take an inductive approach to understand how AI advice influences physicians' decision-making process. We conducted experiments with a total of 68 novice and 12 experienced physicians who diagnosed patient cases with an AI-based system that provided both correct and incorrect advice. Based on qualitative data from think-aloud protocols, interviews, and questionnaires, we elicit five decision-making patterns and develop a process model of medical diagnosis decision augmentation with AI advice. We show that physicians use second-order cognitive processes, namely metacognitions, to monitor and control their reasoning while assessing AI advice. These metacognitions determine whether physicians are able to reap the full benefits of AI or not. Specifically, wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers' own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians fall for decisions based on beliefs rather than actual data or engage in unsuitably superficial information search. Our findings provide a first perspective on the metacognitive mechanisms that decision makers use to evaluate system advice. Overall, our study sheds light on an overlooked facet of decision augmentation with AI, namely, the crucial role of human actors in compensating for technological errors.
Pages: 713-735
Number of pages: 24