A Methodology for Reliability Analysis of Explainable Machine Learning: Application to Endocrinology Diseases

Cited by: 0
Authors
Ketata, Firas [1 ,2 ]
Al Masry, Zeina [1 ]
Yacoub, Slim [2 ]
Zerhouni, Noureddine [1 ]
Affiliations
[1] Inst FEMTO ST, SUPMICROTECH, CNRS, F-25000 Besancon, France
[2] INSAT, Remote Sensing Lab & Informat Syst Spatial Referen, Tunis 1080, Tunisia
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Reliability; Measurement; Stability analysis; Training; Robustness; Computational modeling; Accuracy; Machine learning; Medical diagnosis; Decision support systems; Explainable machine learning; reliability analysis; concordance; stability; generalizability; medical decision support; ARTIFICIAL-INTELLIGENCE; TRUSTWORTHY; RISK;
DOI
10.1109/ACCESS.2024.3431691
CLC number
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Machine learning (ML) has transformed various sectors, including healthcare, by enabling the extraction of complex knowledge and predictions from vast datasets. However, the opacity of ML models, often referred to as "black boxes," hinders their integration into medical practice. Explainable AI (XAI) has emerged as a crucial area for enhancing the transparency and understandability of ML model decisions, particularly in healthcare, where reliability and accuracy are paramount. Yet the reliability of the explanations provided by ML models remains a major challenge, chiefly the difficulty of maintaining the validity and relevance of explanations on new training and test data. In this study, we propose a structured approach to enhance and evaluate the reliability of explanations provided by ML models in healthcare. We aim to improve the reliability of explainability by combining XAI approaches with the k-fold technique. We then developed several metrics to assess the generalizability, concordance, and stability of the combined XAI and k-fold approach, which we applied to case studies on hypothyroidism and diabetes risk prediction using the SHAP and LIME frameworks. Our findings reveal that SHAP combined with k-fold exhibits superior generalizability, stability, and concordance compared to LIME combined with k-fold. The SHAP and k-fold integration provides reliable explanations for hypothyroidism and diabetes predictions, yielding strong concordance with the internal explainability of the random forest model, the best generalizability, and good stability. This structured approach can bolster practitioners' confidence in ML models and facilitate their adoption in healthcare settings.
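The core idea described in the abstract — re-deriving feature attributions on each fold of a k-fold split and then scoring how stable those attributions are across folds — can be sketched in a few lines. This is an illustrative sketch only, not the authors' exact method: the paper uses SHAP and LIME with custom metrics, whereas here we substitute the random forest's built-in impurity importances (so the sketch needs only scikit-learn) and a hypothetical stability score defined as the mean pairwise Spearman correlation of the per-fold importance rankings. The breast-cancer dataset is a stand-in for the endocrinology data used in the paper.

```python
# Illustrative sketch of "explanations per fold + a stability score".
# NOT the paper's method: impurity importances stand in for SHAP/LIME,
# and the stability metric below is a hypothetical substitute.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)  # stand-in clinical dataset

# Fit one model per fold and record its feature attributions.
fold_importances = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    fold_importances.append(model.feature_importances_)

# Hypothetical stability metric: mean pairwise Spearman correlation of
# the per-fold importance rankings (1.0 = identical ranking every fold).
pairs = combinations(fold_importances, 2)
stability = np.mean([spearmanr(a, b)[0] for a, b in pairs])
print(f"stability across folds: {stability:.3f}")
```

A stability score near 1.0 indicates the model highlights the same features regardless of which fold it was trained on, which is the kind of cross-fold consistency the paper's stability metric is designed to capture.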
Pages: 101921-101935
Page count: 15
Related papers
50 records in total
  • [1] Evaluating Explainable Machine Learning Models for Clinicians
    Scarpato, Noemi
    Nourbakhsh, Aria
    Ferroni, Patrizia
    Riondino, Silvia
    Roselli, Mario
    Fallucchi, Francesca
    Barbanti, Piero
    Guadagni, Fiorella
    Zanzotto, Fabio Massimo
    COGNITIVE COMPUTATION, 2024, 16 (04) : 1436 - 1446
  • [2] Explainable machine learning model and reliability analysis for flexural capacity prediction of RC beams strengthened in flexure with FRCM
    Wakjira, Tadesse G.
    Ibrahim, Mohamed
    Ebead, Usama
    Alam, M. Shahria
    ENGINEERING STRUCTURES, 2022, 255
  • [3] Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review
    Mallma, Mirko Jerber Rodriguez
    Zuloaga-Rotta, Luis
    Borja-Rosales, Ruben
    Mallma, Josef Renato Rodriguez
    Vilca-Aguilar, Marcos
    Salas-Ojeda, Maria
    Mauricio, David
    NEUROLOGY INTERNATIONAL, 2024, 16 (06) : 1285 - 1307
  • [4] Machine Learning Applications in Endocrinology and Metabolism Research: An Overview
    Hong, Namki
    Park, Heajeong
    Rhee, Yumie
    ENDOCRINOLOGY AND METABOLISM, 2020, 35 (01) : 71 - 84
  • [5] Explainable Machine Learning for LoRaWAN Link Budget Analysis and Modeling
    Hosseinzadeh, Salaheddin
    Ashawa, Moses
    Owoh, Nsikak
    Larijani, Hadi
    Curtis, Krystyna
    SENSORS, 2024, 24 (03)
  • [6] Using Explainable Machine Learning Methods to Predict the Survivability Rate of Pediatric Respiratory Diseases
    Kumar, Roshan
    Srirama, V
    Chadaga, Krishnaraj
    Muralikrishna, H.
    Sampathila, Niranjana
    Prabhu, Srikanth
    Chadaga, Rajagopala
    IEEE ACCESS, 2024, 12 : 189515 - 189534
  • [7] Explainable artificial intelligence and machine learning: novel approaches to face infectious diseases challenges
    Giacobbe, Daniele Roberto
    Zhang, Yudong
    de la Fuente, Jose
    ANNALS OF MEDICINE, 2023, 55 (02)
  • [8] Continuous Management of Machine Learning-Based Application Behavior
    Anisetti, Marco
    Ardagna, Claudio A.
    Bena, Nicola
    Damiani, Ernesto
    Panero, Paolo G.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2025, 18 (01) : 112 - 125
  • [9] A machine learning methodology for the analysis of workplace accidents
    Matias, J. M.
    Rivas, T.
    Martin, J. E.
    Taboada, J.
    INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 2008, 85 (3-4) : 559 - 578
  • [10] Explainable Machine Learning for Intrusion Detection
    Bellegdi, Sameh
    Selamat, Ali
    Olatunji, Sunday O.
    Fujita, Hamido
    Krejcar, Ondřej
    ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE: THEORY AND APPLICATIONS, IEA-AIE 2024, 2024, 14748 : 122 - 134