Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors?

Cited by: 11
Authors
Kempt, Hendrik [1 ]
Heilinger, Jan-Christoph [1 ]
Nagel, Saskia K. [1 ]
Affiliations
[1] Rhein Westfal TH Aachen, Appl Eth Grp, Theaterpl 14, D-52062 Aachen, Germany
Keywords
Explainability; Heuristics; Double standards; Certifiability; Interpretability; Responsibility; Diagnostics; Medical decision-making; Artificial intelligence
DOI
10.1007/s10676-022-09646-x
Chinese Library Classification (CLC)
B82 [Ethics (Moral Philosophy)]
Abstract
The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. First, we distinguish several goods that explainability is usually considered to contribute to the use of AI in general, and to medical AI in particular. Second, we propose to understand the value of explainability relative to other available norms of explainable decision-making. Third, pointing out that we usually accept heuristics and uses of bounded rationality in medical decision-making by physicians, we argue that the explainability of medical decisions should not be measured against an idealized diagnostic process, but according to practical considerations. Fourth, we conclude by proposing to resolve the issue of explainability standards by relocating it to the AI's certifiability and interpretability.
Pages: 10