Global and local interpretability techniques of supervised machine learning black box models for numerical medical data

Cited by: 37
Authors
Hakkoum, Hajar [1]
Idri, Ali [1,2]
Abnane, Ibtissam [1]
Affiliations
[1] Mohammed V University, SPM Research Team, ENSIAS, Rabat, Morocco
[2] Mohammed VI Polytechnic University, Ben Guerir, Morocco
Keywords
interpretability; XAI; explainability; black box; numerical data; medicine
DOI
10.1016/j.engappai.2023.107829
Chinese Library Classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
The most effective machine learning classification techniques, such as artificial neural networks, are not easily interpretable, which limits their usefulness in critical areas such as medicine, where errors can have severe consequences. Researchers have therefore sought to balance the trade-off between model performance and interpretability. In this study, seven interpretability techniques (global surrogate, accumulated local effects (ALE), local interpretable model-agnostic explanations (LIME), Shapley additive explanations (SHAP), model-agnostic post hoc local explanations (MAPLE), local rule-based explanations (LORE), and contextual importance and utility (CIU)) were evaluated for interpreting five medical classifiers (multilayer perceptron, support vector machine, random forest, extreme gradient boosting, and naive Bayes) using six model performance metrics and three interpretability metrics across six medical numerical datasets. The results confirmed the effectiveness of combining global and local interpretability techniques, and highlighted the superior performance of the global SHAP explainer and local CIU explanations. The quantitative evaluation of explanations emphasised the importance of assessing these interpretability techniques before employing them to interpret black box models.
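The global-plus-local workflow the abstract describes can be illustrated with a short sketch: a global SHAP summary of feature importance alongside a local explanation for a single patient. This is a minimal illustration under assumptions, not the authors' exact pipeline: it assumes the shap, xgboost, and scikit-learn packages, and uses scikit-learn's breast-cancer dataset as a stand-in for the paper's six medical numerical datasets.

```python
# Minimal sketch (not the authors' pipeline): global and local SHAP
# explanations for one of the paper's classifiers (XGBoost) on a public
# medical numerical dataset. Dataset and hyperparameters are illustrative.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# A binary medical classification task with numerical features.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgboost.XGBClassifier(n_estimators=200).fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Global interpretability: rank features by mean |SHAP value| over the test set.
shap.summary_plot(shap_values, X_test, plot_type="bar")

# Local interpretability: per-feature contributions to one patient's prediction.
shap.force_plot(explainer.expected_value, shap_values[0], X_test.iloc[0],
                matplotlib=True)
```

The same pattern generalises to the other techniques the study compares: a global view (surrogate, ALE, summary SHAP) answers which features drive the model overall, while a local view (LIME, MAPLE, LORE, CIU, per-sample SHAP) explains an individual prediction.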
Pages: 18
References (54 in total)
[1] Adhikari, A., 2019. 2019 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).
[2] Ahmed, N.A.M., Alpkocak, A., 2022. A quantitative evaluation of explainable AI methods using the depth of decision tree. Turkish Journal of Electrical Engineering and Computer Sciences 30(6), 2054-2072.
[3] Alvarez-Melis, D., 2018. Advances in Neural Information Processing Systems, Vol. 31.
[4] Anjomshoae, S., 2020. IJCAI-PRICAI 2020 Workshops.
[5] Apley, D.W., Zhu, J., 2020. Visualizing the effects of predictor variables in black box supervised learning models. Journal of the Royal Statistical Society Series B: Statistical Methodology 82(4), 1059-1086.
[6] Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F., 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, 82-115.
[7] Bergstra, J., 2012. Journal of Machine Learning Research 13, 281.
[8] Breiman, L., 2001. Random forests. Machine Learning 45(1), 5-32.
[9] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P., 2002. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research 16, 321-357.
[10] Chen, T., Guestrin, C., 2016. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), pp. 785-794.