A Methodology for Reliability Analysis of Explainable Machine Learning: Application to Endocrinology Diseases

Cited by: 0
Authors
Ketata, Firas [1 ,2 ]
Al Masry, Zeina [1 ]
Yacoub, Slim [2 ]
Zerhouni, Noureddine [1 ]
Affiliations
[1] Inst FEMTO ST, SUPMICROTECH, CNRS, F-25000 Besancon, France
[2] INSAT, Remote Sensing Lab & Informat Syst Spatial Referen, Tunis 1080, Tunisia
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Reliability; Measurement; Stability analysis; Training; Robustness; Computational modeling; Accuracy; Machine learning; Medical diagnosis; Decision support systems; Explainable machine learning; reliability analysis; concordance; stability; generalizability; medical decision support; ARTIFICIAL-INTELLIGENCE; TRUSTWORTHY; RISK;
DOI
10.1109/ACCESS.2024.3431691
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Machine learning (ML) has transformed various sectors, including healthcare, by enabling the extraction of complex knowledge and predictions from vast datasets. However, the opacity of ML models, often referred to as "black boxes," hinders their integration into medical practice. Explainable AI (XAI) has emerged as a crucial means of enhancing the transparency and understandability of ML model decisions, particularly in healthcare, where reliability and accuracy are paramount. Yet the reliability of the explanations produced for ML models remains a major challenge: explanations must stay valid and relevant when models are confronted with new training and test data. In this study, we propose a structured approach to enhance and evaluate the reliability of explanations provided by ML models in healthcare. We improve the reliability of explainability by combining XAI approaches with the k-fold cross-validation technique. We then develop several metrics to assess the generalizability, concordance, and stability of the combined XAI and k-fold approach, which we apply to case studies on hypothyroidism and diabetes risk prediction using the SHAP and LIME frameworks. Our findings reveal that SHAP combined with k-fold exhibits superior generalizability, stability, and concordance compared to LIME combined with k-fold. The SHAP and k-fold integration yields reliable explanations for hypothyroidism and diabetes predictions, showing strong concordance with the internal explainability of the random forest model, the best generalizability, and good stability. This structured approach can bolster practitioners' confidence in ML models and facilitate their adoption in healthcare settings.
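The stability and concordance notions described in the abstract can be illustrated with a minimal, self-contained sketch. This is a hypothetical proxy, not the paper's actual metric definitions: `fold_stability` scores how similar per-fold attribution vectors are (mean pairwise cosine similarity), and `rank_concordance` compares two attribution vectors' feature rankings via Spearman correlation. All function names, the aggregation scheme, and the example values are illustrative assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two attribution vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def fold_stability(attributions):
    """Mean pairwise cosine similarity across per-fold attribution
    vectors: a hypothetical stability proxy (1.0 means identical
    explanations in every fold)."""
    k = len(attributions)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    return sum(cosine(attributions[i], attributions[j])
               for i, j in pairs) / len(pairs)

def rank_concordance(a, b):
    """Spearman rank correlation between two attribution vectors,
    a simple proxy for concordance between explanation methods
    (e.g. SHAP values vs. a model's built-in feature importances).
    Assumes no tied values, for brevity."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    var = math.sqrt(sum((x - ma) ** 2 for x in ra)
                    * sum((y - mb) ** 2 for y in rb))
    return cov / var

# Three folds with nearly identical attributions -> stability close to 1
folds = [[0.50, 0.30, 0.20],
         [0.48, 0.32, 0.20],
         [0.52, 0.28, 0.20]]
print(f"stability:   {fold_stability(folds):.3f}")
print(f"concordance: {rank_concordance(folds[0], folds[1]):.3f}")
```

In the paper's setting, each row of `folds` would correspond to an explanation aggregated over one k-fold split (for instance, mean absolute SHAP values per feature); the aggregation and metric choices here are stand-ins for the metrics the paper develops.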
Pages: 101921-101935 (15 pages)
Related Articles
(50 total)
  • [41] Explainable machine learning models with privacy
    Bozorgpanah, Aso
    Torra, Vicenc
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2024, 13 (01) : 31 - 50
  • [42] Explainable Machine Learning for Trustworthy AI
    Giannotti, Fosca
    ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2022, 356 : 3 - 3
  • [43] Principles and Practice of Explainable Machine Learning
    Belle, Vaishak
    Papantonis, Ioannis
    FRONTIERS IN BIG DATA, 2021, 4
  • [45] Explainable machine learning in cybersecurity: A survey
    Yan, Feixue
    Wen, Sheng
    Nepal, Surya
    Paris, Cecile
    Xiang, Yang
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 12305 - 12334
  • [46] Explainable machine learning for knee osteoarthritis diagnosis based on a novel fuzzy feature selection methodology
    Kokkotis, Christos
    Ntakolia, Charis
    Moustakidis, Serafeim
    Giakas, Giannis
    Tsaopoulos, Dimitrios
    PHYSICAL AND ENGINEERING SCIENCES IN MEDICINE, 2022, 45 (01) : 219 - 229
  • [47] Application of explainable machine learning for estimating direct and diffuse components of solar irradiance
    Rajagukguk, Rial A.
    Lee, Hyunjin
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [49] Interpretable Prediction of a Decentralized Smart Grid Based on Machine Learning and Explainable Artificial Intelligence
    Cifci, Ahmet
    IEEE ACCESS, 2025, 13 : 36285 - 36305
  • [50] Explainable and Fair AI: Balancing Performance in Financial and Real Estate Machine Learning Models
    Acharya, Deepak Bhaskar
    Divya, B.
    Kuppan, Karthigeyan
    IEEE ACCESS, 2024, 12 : 154022 - 154034