Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability

Cited by: 157
Author: Coeckelbergh, Mark [1]
Affiliation: [1] Univ Vienna, Dept Philosophy, Univ Str 7 NIG, A-1180 Vienna, Austria
Keywords: Artificial intelligence (AI); Responsibility; Responsibility attribution; Responsibility conditions; Answerability; Moral agency; Moral patiency; Problem of many hands; Transparency; Explainability; Moral responsibility; Ethics
DOI: 10.1007/s11948-019-00146-8
Chinese Library Classification (CLC): B82 [Ethics (moral philosophy)]
Abstract
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of "many things" is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or "patients" of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.
Pages: 2051-2068 (18 pages)
Related papers (50 records in total; first 10 listed below)
  • [1] Coeckelbergh, Mark. Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Science and Engineering Ethics, 2020, 26: 2051-2068.
  • [2] Reddy, Sandeep. Explainability and artificial intelligence in medicine. Lancet Digital Health, 2022, 4(04).
  • [3] Ha, Taehyun; Lee, Sangwon; Kim, Sangyeon. Designing Explainability of an Artificial Intelligence System. Proceedings of the Technology, Mind, and Society Conference (TechMindSociety'18), 2018.
  • [4] Combi, Carlo; Amico, Beatrice; Bellazzi, Riccardo; Holzinger, Andreas; Moore, Jason H.; Zitnik, Marinka; Holmes, John H. A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine, 2022, 133.
  • [5] Holzinger, Andreas; Langs, Georg; Denk, Helmut; Zatloukal, Kurt; Mueller, Heimo. Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 2019, 9(04).
  • [6] Chen, Antian; Wang, Chenyu; Zhang, Xinqing. Reflection on the equitable attribution of responsibility for artificial intelligence-assisted diagnosis and treatment decisions. Intelligent Medicine, 2023, 3(02): 139-143.
  • [7] Da Silva, Michael. Explainability, Public Reason, and Medical Artificial Intelligence. Ethical Theory and Moral Practice, 2023, 26(05): 743-762.
  • [8] McDermid, John A.; Jia, Yan; Porter, Zoe; Habli, Ibrahim. Artificial intelligence explainability: the technical and ethical dimensions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2021, 379(2207).
  • [9] Amann, Julia; Blasimme, Alessandro; Vayena, Effy; Frey, Dietmar; Madai, Vince I. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 2020, 20(01).
  • [10] Pinheiro, Luis Correia; Kurz, Xavier. Artificial intelligence in pharmacovigilance: A regulatory perspective on explainability. Pharmacoepidemiology and Drug Safety, 2022, 31(12): 1308-1310.