Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?; [Ebenen der Explizierbarkeit für medizinische künstliche Intelligenz: Was brauchen wir normativ und was können wir technisch erreichen?]

Cited by: 0
Authors
Ursin F. [1]
Lindner F. [2]
Ropinski T. [3]
Salloch S. [1]
Timmermann C. [4]
Affiliations
[1] Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, Hannover
[2] Institute for Artificial Intelligence, Ulm University, Ulm
[3] Visual Computing Group, Ulm University, Ulm
[4] Ethics of Medicine, Medical Faculty, University of Augsburg, Augsburg
Keywords
Explainability; Informed consent; Intelligibility; Interpretability; Transparency
DOI
10.1007/s00481-023-00761-x
Abstract
Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity in artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This creates ethical tensions, because physicians and patients want to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs involved. Which levels of explicability are needed to obtain informed consent when utilizing medical AI? Arguments: We proceed in five steps: First, we map the terms commonly associated with explicability in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability with respect to informed consent. Third, we distinguish hurdles to explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can be met technically from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example. Conclusion: We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements. © 2023, The Author(s).
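As an illustration of the kind of post-hoc explainability technique the abstract's final step alludes to, the following is a minimal, self-contained sketch (not taken from the paper) of occlusion-based saliency mapping for an image classifier such as a diagnostic radiology model. The `predict` stub, the patch size, and the stride are hypothetical placeholders for illustration only.

```python
# Illustrative sketch only: the paper is conceptual and contains no code.
# Occlusion sensitivity is one common post-hoc explainability method:
# mask out image regions one at a time and record how much the model's
# predicted probability drops. The classifier below is a toy stand-in.
import numpy as np

def predict(image: np.ndarray) -> float:
    """Stand-in for a diagnostic classifier (e.g., a lesion detector).
    Returns the predicted probability of the positive finding."""
    # Toy stub: responds to the mean intensity of a fixed "lesion" region.
    return float(image[24:40, 24:40].mean())

def occlusion_saliency(image: np.ndarray, patch: int = 8, stride: int = 8) -> np.ndarray:
    """Estimate each region's contribution to the prediction by masking it out."""
    baseline = predict(image)
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0] - patch + 1, stride):
        for x in range(0, image.shape[1] - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # blank out one patch
            # A large probability drop means this region mattered for the output.
            heat[y:y + patch, x:x + patch] = baseline - predict(occluded)
    return heat

rng = np.random.default_rng(0)
scan = rng.random((64, 64))          # placeholder for a grayscale scan
saliency = occlusion_saliency(scan)
print("most influential region peaks at",
      np.unravel_index(saliency.argmax(), saliency.shape))
```

The resulting heat map can be overlaid on the scan so that a physician sees which regions drove the classification; whether such a visualization satisfies any given level of explicability is precisely the normative question the paper addresses.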
Pages: 173–199
Page count: 26