Trust in medical artificial intelligence: a discretionary account

Cited by: 0
Author
Philip J. Nickel
Affiliation
[1] Department of Philosophy and Ethics, School of Innovation Sciences, Eindhoven University of Technology
Source
Ethics and Information Technology, 2022, Vol. 24
Keywords
Artificial intelligence; Trust in AI; Discretion; Normative expectations; Future of medicine;
DOI
Not available
Abstract
This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI practitioners through the vehicle of an AI application. I conclude with four critical questions based on the discretionary account to determine if trust in particular AI applications is sound, and a brief discussion of the possibility that the main roles of the physician could be replaced by AI.
Related Papers (50 total)
  • [1] Trust in medical artificial intelligence: a discretionary account
    Nickel, Philip J.
    ETHICS AND INFORMATION TECHNOLOGY, 2022, 24 (01)
  • [2] Intentional machines: A defence of trust in medical artificial intelligence
    Starke, Georg
    van den Brule, Rik
    Elger, Bernice Simone
    Haselager, Pim
    BIOETHICS, 2022, 36 (02) : 154 - 161
  • [3] Can We Trust Artificial Intelligence?
    Christian Budnik
    Philosophy & Technology, 2025, 38 (1)
  • [4] Trust in Artificial Intelligence: Exploring the Influence of Model Presentation and Model Interaction on Trust in a Medical Setting
    Wunn, Tina
    Sent, Danielle
    Peute, Linda W. P.
    Leijnen, Stefan
    ARTIFICIAL INTELLIGENCE: ECAI 2023 INTERNATIONAL WORKSHOPS, PT 2, 2024, 1948 : 76 - 86
  • [5] Attachment and trust in artificial intelligence
    Gillath, Omri
    Ai, Ting
    Branicky, Michael S.
    Keshmiri, Shawn
    Davison, Robert B.
    Spaulding, Ryan
    COMPUTERS IN HUMAN BEHAVIOR, 2021, 115
  • [6] Trust and Success of Artificial Intelligence in Medicine
    Miklavcic, Jonas
    BOGOSLOVNI VESTNIK-THEOLOGICAL QUARTERLY-EPHEMERIDES THEOLOGICAE, 2021, 81 (04): : 935 - 946
  • [7] Trust in Artificial Intelligence in Radiotherapy: A Survey
    Heising, Luca M.
    Ou, Carol X. J.
    Verhaegen, Frank
    Wolfs, Cecile J. A.
    Hoebers, Frank
    Jacobs, Maria J. G.
    RADIOTHERAPY AND ONCOLOGY, 2024, 194 : S2857 - S2860
  • [8] How Much to Trust Artificial Intelligence?
    Hurlburt, George
    IT PROFESSIONAL, 2017, 19 (04) : 7 - 11
  • [9] SHOULD WE TRUST ARTIFICIAL INTELLIGENCE?
    Sutrop, Margit
    TRAMES-JOURNAL OF THE HUMANITIES AND SOCIAL SCIENCES, 2019, 23 (04): : 499 - 522
  • [10] Transparency and trust in artificial intelligence systems
    Schmidt, Philipp
    Biessmann, Felix
    Teubner, Timm
    JOURNAL OF DECISION SYSTEMS, 2020, 29 (04) : 260 - 278