Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons

Cited by: 31
Authors
Benzinger, Lasse [1]
Ursin, Frank [1]
Balke, Wolf-Tilo [2]
Kacprowski, Tim [3,4,5]
Salloch, Sabine [1]
Affiliations
[1] Hannover Med Sch MHH, Inst Ethics Hist & Philosophy Med, Carl Neuberg Str 1, D-30625 Hannover, Germany
[2] TU Braunschweig, Inst Informat Syst, Braunschweig, Germany
[3] Tech Univ Carolo Wilhelmina Braunschweig, Peter L Reichertz Inst Med Informat, Div Data Sci Biomed, Braunschweig, Germany
[4] Hannover Med Sch, Braunschweig, Germany
[5] TU Braunschweig, Braunschweig Integrated Ctr Syst Biol BRICS, Braunschweig, Germany
Keywords
Ethics, clinical; Decision-making; Artificial intelligence; Decision support systems, clinical; Systematic review; PATIENT PREFERENCE PREDICTOR; ALGORITHMIC ETHICS; CARE; CLASSIFICATION; LANDSCAPE; ISSUES
DOI
10.1186/s12910-023-00929-6
Chinese Library Classification
B82 [Ethics (Moral Philosophy)]
Abstract
Background
Healthcare providers have to make ethically complex clinical decisions, which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use.
Methods
PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title- and abstract-screened according to defined inclusion and exclusion criteria, resulting in 44 papers whose full texts were analysed using the Kuckartz method of qualitative text analysis.
Results
Artificial Intelligence might increase patient autonomy by improving the accuracy of predictions and allowing patients to receive their preferred treatment. It is thought to increase beneficence by providing reliable information, thereby supporting surrogate decision-making. Some authors fear that reducing ethical decision-making to statistical correlations may limit autonomy. Others argue that AI may not be able to replicate the process of ethical deliberation because it lacks human characteristics. Concerns have been raised about issues of justice, as AI may replicate existing biases in the decision-making process.
Conclusions
The prospective benefits of using AI in clinical ethical decision-making are manifold, but its development and use should be undertaken carefully to avoid ethical pitfalls. Several issues that are central to the discussion of Clinical Decision Support Systems, such as justice, explicability or human-machine interaction, have so far been neglected in the debate on AI for clinical ethics.
Trial registration
This review is registered at Open Science Framework (https://osf.io/wvcs9).
Pages: 9
Related papers
55 items in total
[21] Gurupur, Varadraj; Wan, Thomas T. H. Inherent Bias in Artificial Intelligence-Based Decision Support Systems for Healthcare. Medicina-Lithuania, 2020, 56(03).
[22] Hannun, Awni Y.; Rajpurkar, Pranav; Haghpanahi, Masoumeh; Tison, Geoffrey H.; Bourn, Codie; Turakhia, Mintu P.; Ng, Andrew Y. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nature Medicine, 2019, 25(01):65+.
[23] Hook, C. Christopher. Handbook of Clinical Neurology, 2013, 118:25. DOI 10.1016/B978-0-444-53501-6.00003-2.
[24] Howard, Dana. AJOB Empirical Bioethics, 2022, 13:125. DOI 10.1080/23294515.2022.2040643.
[25] Hubbard, Ryan; Greenblum, Jake. Surrogates and Artificial Intelligence: Why AI Trumps Family. Science and Engineering Ethics, 2020, 26(06):3217-3227.
[26] Hurst, S. A.; Perrier, A.; Pegoraro, R.; Reiter-Theil, S.; Forde, R.; Slowther, A-M; Garrett-Mayer, E.; Danis, M. Ethical difficulties in clinical practice: experiences of European doctors. Journal of Medical Ethics, 2007, 33(01):51-57.
[27] Jardas, E. J.; Wasserman, David; Wendler, David. Autonomy-based criticisms of the patient preference predictor. Journal of Medical Ethics, 2022, 48(05):304-310.
[28] Jobin, Anna; Ienca, Marcello; Vayena, Effy. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 2019, 1(09):389-399.
[29] John, Stephen. Patient Preference Predictors, Apt Categorization, and Respect for Autonomy. Journal of Medicine and Philosophy, 2014, 39(02):169-177.
[30] Kahrass, H. PRISMA-Ethics - Reporting Guideline for Systematic Reviews on Ethics Literature: development, explanations and examples. 2024. DOI 10.31219/OSF.IO/G5KFB.