Applications of and issues with machine learning in medicine: Bridging the gap with explainable AI

Cited: 6
Authors
Karako, Kenji [1 ]
Tang, Wei [1 ,2 ]
Affiliations
[1] Univ Tokyo, Grad Sch Med, Dept Surg, Hepatobiliary Pancreat Surg Div, 7-3-1 Hongo,Bunkyo Ku, Tokyo 1138655, Japan
[2] Natl Ctr Global Hlth & Med, Tokyo, Japan
Keywords
machine learning; deep learning; explainable AI; medical applications;
DOI
10.5582/bst.2024.01342
Chinese Library Classification
Q [Biological Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
In recent years, machine learning, and particularly deep learning, has shown remarkable potential in various fields, including medicine. Advanced techniques like convolutional neural networks and transformers have enabled high-performance predictions for complex problems, making machine learning a valuable tool in medical decision-making. From predicting postoperative complications to assessing disease risk, machine learning has been actively used to analyze patient data and assist healthcare professionals. However, the "black box" problem, wherein the internal workings of machine learning models are opaque and difficult to interpret, poses a significant challenge in medical applications. The lack of transparency may hinder trust and acceptance by clinicians and patients, making the development of explainable AI (XAI) techniques essential. XAI aims to provide both global and local explanations for machine learning models, offering insights into how predictions are made and which factors influence these outcomes. In this article, we explore various applications of machine learning in medicine, describe commonly used algorithms, and discuss explainable AI as a promising solution to enhance the interpretability of these models. By integrating explainability into machine learning, we aim to ensure its ethical and practical application in healthcare, ultimately improving patient outcomes and supporting personalized treatment strategies.
Pages: 497-504 (8 pages)
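The abstract's distinction between global and local explanations can be made concrete with a minimal sketch of a perturbation-based local explanation: each feature of one patient is replaced by a reference (baseline) value, and the resulting change in the model's output is attributed to that feature. The model, feature names, weights, and baseline values below are hypothetical illustrations, not taken from the article.

```python
# Minimal sketch of a perturbation-based local explanation (occlusion-style).
# Assumption: risk_model, its weights, and the patient/baseline values are
# illustrative stand-ins, not the methods or data of the article.

def risk_model(features):
    # Toy stand-in for a trained predictor: a simple weighted sum.
    weights = {"age": 0.02, "bmi": 0.03, "hba1c": 0.10}
    return sum(weights[name] * value for name, value in features.items())

def local_explanation(model, patient, baseline):
    """Attribute one prediction to each feature by replacing that feature
    with its baseline value and measuring the drop in model output."""
    full = model(patient)
    contributions = {}
    for name in patient:
        perturbed = dict(patient, **{name: baseline[name]})
        contributions[name] = full - model(perturbed)
    return full, contributions

patient = {"age": 65, "bmi": 31.0, "hba1c": 8.2}
baseline = {"age": 50, "bmi": 25.0, "hba1c": 5.5}
prediction, contribs = local_explanation(risk_model, patient, baseline)
```

A global explanation, by contrast, would average such attributions (or inspect the weights directly) across the whole patient population rather than explaining a single prediction.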