Trust does not need to be human: it is possible to trust medical AI

Times cited: 25
Authors
Ferrario, Andrea [1]
Loi, Michele [2,3]
Viganò, Eleonora [2,3]
Affiliations
[1] Swiss Fed Inst Technol, Dept Management Technol & Econ, Zurich, Switzerland
[2] Univ Zurich, Digital Soc Initiat DSI, Zurich, Switzerland
[3] Univ Zurich, Inst Biomed Eth & Hist Med IBME, Zurich, Switzerland
Keywords
philosophical ethics; clinical ethics; information technology
DOI
10.1136/medethics-2020-106922
Chinese Library Classification
B82 [Ethics (Moral Philosophy)]
Abstract
In his recent article 'Limits of trust in medical AI', Hatherley argues that if the motivations usually recognised as relevant for interpersonal trust must also apply to interactions between humans and medical artificial intelligence, then these systems do not appear to be appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI) if one refrains from simply assuming that trust describes only human-human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. On this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not presuppose properties in the AI system that, in fact, only humans can have. This account of trust applies, in particular, to all cases where a physician relies on the predictions of a medical AI to support his or her decision making.
Pages: 437-438
Number of pages: 2
References (4 records)
  • [1] Arrow K. J., 1974, The Limits of Organization
  • [2] Ferrario A., 2020, Philosophy & Technology, V33, P523, DOI 10.1007/s13347-019-00378-3
  • [3] Hatherley J. J., 2020, Limits of trust in medical AI, Journal of Medical Ethics, V46(7), P478-481
  • [4] Nickel P. J., 2012, In Handbook of Risk Theory, P857, DOI 10.1007/978-94-007-1433-5_34, 10.1007/978-94-007-5243-6_14