Artificial Intelligence and Patient-Centered Decision-Making

Cited by: 1
Authors
Bjerring J.C. [1]
Busch J. [1]
Affiliations
[1] Department of Philosophy, Aarhus University, Jens Chr. Skous Vej 7, Aarhus C
Keywords
Artificial intelligence and medicine; Black-box medicine; Evidence-based medicine; Medical decision-making; Patient-centered medicine
DOI
10.1007/s13347-019-00391-6
Abstract
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient. © 2020, Springer Nature B.V.
Pages: 349–371 (22 pages)