A Methodology for Reliability Analysis of Explainable Machine Learning: Application to Endocrinology Diseases

Cited: 0
Authors
Ketata, Firas [1 ,2 ]
Al Masry, Zeina [1 ]
Yacoub, Slim [2 ]
Zerhouni, Noureddine [1 ]
Affiliations
[1] Inst FEMTO ST, SUPMICROTECH, CNRS, F-25000 Besancon, France
[2] INSAT, Remote Sensing Lab & Informat Syst Spatial Referen, Tunis 1080, Tunisia
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Reliability; Measurement; Stability analysis; Training; Robustness; Computational modeling; Accuracy; Machine learning; Medical diagnosis; Decision support systems; Explainable machine learning; reliability analysis; concordance; stability; generalizability; medical decision support; ARTIFICIAL-INTELLIGENCE; TRUSTWORTHY; RISK;
DOI
10.1109/ACCESS.2024.3431691
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Machine learning (ML) has transformed various sectors, including healthcare, by enabling the extraction of complex knowledge and predictions from vast datasets. However, the opacity of ML models, often referred to as "black boxes," hinders their integration into medical practice. Explainable AI (XAI) has emerged as a crucial area for enhancing the transparency and understandability of ML model decisions, particularly in healthcare, where reliability and accuracy are paramount. However, the reliability of the explanations provided by ML models remains a major challenge. This mainly concerns the difficulty of maintaining the validity and relevance of explanations on new training and test data. In this study, we propose a structured approach to enhance and evaluate the reliability of explanations provided by ML models in healthcare. We aim to improve the reliability of explainability by combining XAI approaches with the k-fold technique. We then developed several metrics to assess the generalizability, concordance, and stability of the combined XAI and k-fold approach, which we applied to case studies on hypothyroidism and diabetes risk prediction using the SHAP and LIME frameworks. Our findings reveal that the SHAP approach combined with k-fold exhibits superior generalizability, stability, and concordance compared to the combination of LIME with k-fold. The SHAP and k-fold integration yields reliable explanations for hypothyroidism and diabetes predictions, with strong concordance with the internal explainability of the random forest model, the best generalizability, and good stability. This structured approach can bolster practitioners' confidence in ML models and facilitate their adoption in healthcare settings.
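The core idea in the abstract — fitting a model per cross-validation fold, collecting an explanation per fold, and then measuring how stable the explanations are across folds — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn, uses a synthetic dataset, and substitutes the random forest's internal feature importances (the "internal explainability" the abstract mentions) for a SHAP or LIME explainer; the per-feature standard deviation across folds is one simple stand-in for the paper's stability metrics.

```python
# Sketch: per-fold explanations + a simple cross-fold stability proxy.
# Assumes scikit-learn; RF feature_importances_ stands in for SHAP/LIME.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

# Synthetic data in place of the hypothyroidism/diabetes datasets.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

fold_importances = []
for train_idx, _test_idx in KFold(n_splits=5, shuffle=True,
                                  random_state=0).split(X):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    # One explanation vector per fold (here: RF impurity importances).
    fold_importances.append(model.feature_importances_)

imp = np.array(fold_importances)        # shape: (5 folds, 8 features)
mean_importance = imp.mean(axis=0)      # aggregated explanation
stability = imp.std(axis=0)             # low std => stable across folds

print("mean importances:", np.round(mean_importance, 3))
print("per-feature std across folds:", np.round(stability, 3))
```

In the paper's setup, `fold_importances` would instead hold per-fold SHAP or LIME attribution vectors, and concordance could be assessed by comparing their ranking against the random forest's internal importances.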
Pages: 101921-101935
Page count: 15
Related Papers
50 in total
  • [31] An Explainable Machine Learning Framework for Intrusion Detection Systems
    Wang, Maonan
    Zheng, Kangfeng
    Yang, Yanqing
    Wang, Xiujuan
    IEEE ACCESS, 2020, 8 : 73127 - 73141
  • [32] Explainable AI: A Review of Machine Learning Interpretability Methods
    Linardatos, Pantelis
    Papastefanopoulos, Vasilis
    Kotsiantis, Sotiris
    ENTROPY, 2021, 23 (01) : 1 - 45
  • [33] Understanding cirrus clouds using explainable machine learning
    Jeggle, Kai
    Neubauer, David
    Camps-Valls, Gustau
    Lohmann, Ulrike
    ENVIRONMENTAL DATA SCIENCE, 2023, 2
  • [34] Machine learning application in autoimmune diseases: State of art and future prospectives
    Danieli, Maria Giovanna
    Brunetto, Silvia
    Gammeri, Luca
    Palmeri, Davide
    Claudi, Ilaria
    Shoenfeld, Yehuda
    Gangemi, Sebastiano
    AUTOIMMUNITY REVIEWS, 2024, 23 (02)
  • [35] Identifying diagnostic biomarkers for Erythemato-Squamous diseases using explainable machine learning
    Wang, Zheng
    Chang, Li
    Shi, Tong
    Hu, Hui
    Wang, Chong
    Lin, Kaibin
    Zhang, Jianglin
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 100
  • [36] Using an Explainable Machine Learning Approach to Characterize Earth System Model Errors: Application of SHAP Analysis to Modeling Lightning Flash Occurrence
    Silva, Sam J.
    Keller, Christoph A.
    Hardin, Joseph
    JOURNAL OF ADVANCES IN MODELING EARTH SYSTEMS, 2022, 14 (04)
  • [37] Application of machine learning methodology for PET-based definition of lung cancer
    Kerhet, A.
    Small, C.
    Quon, H.
    Riauka, T.
    Schrader, L.
    Greiner, R.
    Yee, D.
    McEwan, A.
    Roa, W.
    CURRENT ONCOLOGY, 2010, 17 (01) : 41 - 47
  • [38] Automated Stroke Prediction Using Machine Learning: An Explainable and Exploratory Study With a Web Application for Early Intervention
    Mridha, Krishna
    Ghimire, Sandesh
    Shin, Jungpil
    Aran, Anmol
    Uddin, Md. Mezbah
    Mridha, M. F.
    IEEE ACCESS, 2023, 11 : 52288 - 52308
  • [39] Explainable Machine Learning via Argumentation
    Prentzas, Nicoletta
    Pattichis, Constantinos
    Kakas, Antonis
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT III, 2023, 1903 : 371 - 398
  • [40] Explainable machine learning for diffraction patterns
    Nawaz, Shah
    Rahmani, Vahid
    Pennicard, David
    Setty, Shabarish Pala Ramakantha
    Klaudel, Barbara
    Graafsma, Heinz
    JOURNAL OF APPLIED CRYSTALLOGRAPHY, 2023, 56 : 1494 - 1504