How does the model make predictions? A systematic literature review on the explainability power of machine learning in healthcare

Cited by: 58
Authors
Allgaier, Johannes [1 ]
Mulansky, Lena [1 ]
Draelos, Rachel Lea [2 ]
Pryss, Ruediger [1 ]
Affiliations
[1] Julius Maximilians Univ Wurzburg JMU, Inst Clin Epidemiol & Biometry, Wurzburg, Germany
[2] Cydoc, Durham, NC USA
Keywords
Explainable artificial intelligence; XAI; Interpretable machine learning; PRISMA; Medicine; Healthcare; Review; ARTIFICIAL-INTELLIGENCE; SKIN-CANCER; BLACK-BOX; EXPLANATIONS;
DOI
10.1016/j.artmed.2023.102616
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Background: Medical use cases for machine learning (ML) are growing exponentially. The first hospitals are already using ML systems as decision support systems in their daily routine. At the same time, most ML systems remain opaque, and it is unclear how these systems arrive at their predictions.
Methods: In this paper, we provide a brief overview of the taxonomy of explainability methods and review popular methods. In addition, we conduct a systematic literature search on PubMed to investigate which explainable artificial intelligence (XAI) methods are used in 450 specific medical supervised ML use cases, how the use of XAI methods has emerged recently, and how the precision of describing ML pipelines has evolved over the past 20 years.
Results: A large fraction of publications with ML use cases do not use XAI methods at all to explain ML predictions. However, when XAI methods are used, open-source and model-agnostic explanation methods are more commonly used, with SHapley Additive exPlanations (SHAP) for tabular data and Gradient-weighted Class Activation Mapping (Grad-CAM) for image data leading the way. ML pipelines have been described in increasing detail and uniformity in recent years. However, the willingness to share data and code has stagnated at about one quarter.
Conclusions: XAI methods are mainly used when their application requires little effort. The homogenization of reporting in ML use cases facilitates the comparability of work and should be advanced in the coming years. Experts who can mediate between the worlds of informatics and medicine will be increasingly in demand when using ML systems, due to the high complexity of the domain.
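The abstract identifies SHAP, which attributes a model's prediction to individual tabular features via Shapley values, as the most commonly used XAI method. As a minimal, self-contained illustration of the underlying idea (not the `shap` library's API), the sketch below computes exact Shapley values for a toy model by averaging each feature's marginal contribution over all feature subsets, substituting a baseline value for "absent" features; the model, inputs, and baseline are all hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x.

    Each feature's value is its weighted average marginal contribution
    over all subsets of the other features; features outside a subset
    are replaced by their baseline value.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Standard Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "risk score" over two tabular features.
model = lambda v: 2.0 * v[0] + 1.0 * v[1]
phi = shapley_values(model, x=[3.0, 5.0], baseline=[0.0, 0.0])
# For a linear model, phi_i = w_i * (x_i - baseline_i) -> [6.0, 5.0],
# and the attributions sum to f(x) - f(baseline).
```

Exact enumeration is exponential in the number of features; the `shap` library the reviewed papers use relies on approximations (e.g. sampling or model-specific shortcuts) to scale.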
Pages: 13