Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors?

Cited by: 11
Authors
Kempt, Hendrik [1 ]
Heilinger, Jan-Christoph [1 ]
Nagel, Saskia K. [1 ]
Affiliations
[1] Rhein Westfal TH Aachen, Appl Eth Grp, Theaterpl 14, D-52062 Aachen, Germany
Keywords
Explainability; Heuristics; Double standards; Certifiability; Interpretability; Responsibility; Diagnostics; Medical decision-making; Artificial intelligence
DOI
10.1007/s10676-022-09646-x
Chinese Library Classification
B82 [Ethics (Moral Philosophy)]
Abstract
The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. To do so, we first distinguish several goods that explainability is usually considered to contribute to the use of AI in general, and to medical AI in particular. Second, we propose to understand the value of explainability relative to other available norms of explainable decision-making. Third, pointing out that we usually accept heuristics and uses of bounded rationality in medical decision-making by physicians, we argue that the explainability of medical decisions should not be measured against an idealized diagnostic process, but according to practical considerations. Fourth, we conclude that the issue of explainability standards can be resolved by relocating it to the AI's certifiability and interpretability.
Pages: 10