A Methodology for Reliability Analysis of Explainable Machine Learning: Application to Endocrinology Diseases

Cited by: 0
Authors:
Ketata, Firas [1 ,2 ]
Al Masry, Zeina [1 ]
Yacoub, Slim [2 ]
Zerhouni, Noureddine [1 ]
Affiliations:
[1] Inst FEMTO ST, SUPMICROTECH, CNRS, F-25000 Besancon, France
[2] INSAT, Remote Sensing Lab & Informat Syst Spatial Referen, Tunis 1080, Tunisia
Source:
IEEE ACCESS | 2024, Vol. 12
Keywords:
Reliability; Measurement; Stability analysis; Training; Robustness; Computational modeling; Accuracy; Machine learning; Medical diagnosis; Decision support systems; Explainable machine learning; reliability analysis; concordance; stability; generalizability; medical decision support; ARTIFICIAL-INTELLIGENCE; TRUSTWORTHY; RISK;
DOI:
10.1109/ACCESS.2024.3431691
Chinese Library Classification (CLC):
TP [automation technology, computer technology]
Subject classification code:
0812
Abstract:
Machine learning (ML) has transformed many sectors, including healthcare, by enabling the extraction of complex knowledge and predictions from vast datasets. However, the opacity of ML models, often referred to as "black boxes," hinders their integration into medical practice. Explainable AI (XAI) has emerged as a crucial means of making ML model decisions transparent and understandable, particularly in healthcare, where reliability and accuracy are paramount. Yet the reliability of the explanations themselves remains a major challenge, chiefly because it is difficult to keep explanations valid and relevant on new training and test data. In this study, we propose a structured approach to enhance and evaluate the reliability of the explanations produced by ML models in healthcare. We improve the reliability of explainability by combining XAI approaches with the k-fold cross-validation technique, and we develop several metrics to assess the generalizability, concordance, and stability of the combined XAI and k-fold approach. We apply the methodology to case studies on hypothyroidism and diabetes risk prediction using the SHAP and LIME frameworks. Our findings reveal that SHAP combined with k-fold exhibits superior generalizability, stability, and concordance compared with LIME combined with k-fold. The SHAP and k-fold integration yields reliable explanations for hypothyroidism and diabetes predictions, showing strong concordance with the internal explainability of the random forest model, the best generalizability, and good stability. This structured approach can bolster practitioners' confidence in ML models and facilitate their adoption in healthcare settings.
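To make the pipeline described in the abstract concrete, below is a minimal sketch of one way to combine SHAP with k-fold cross-validation and compute a simple stability proxy (the average Jaccard overlap of the top-k features across folds). The dataset, the random forest settings, and the stability formula are illustrative assumptions for this sketch, not the metrics defined in the paper.

```python
# Minimal sketch (assumptions, not the paper's exact implementation): combine
# SHAP with k-fold cross-validation and compute a simple stability proxy --
# the average Jaccard overlap of the top-k features selected in each fold.
# The dataset below is a stand-in for the hypothyroidism/diabetes data.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)    # placeholder tabular dataset
TOP_K = 5
top_sets = []

for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])

    # Explain the held-out fold with SHAP's TreeExplainer.
    shap_values = shap.TreeExplainer(model).shap_values(X[test_idx])
    vals = shap_values[1] if isinstance(shap_values, list) else shap_values
    if vals.ndim == 3:                         # newer SHAP: (samples, features, classes)
        vals = vals[..., 1]

    importance = np.abs(vals).mean(axis=0)     # mean |SHAP| per feature
    top_sets.append(set(np.argsort(importance)[-TOP_K:]))

# Stability proxy: mean pairwise Jaccard similarity of the top-k sets across folds.
pairs = [(i, j) for i in range(len(top_sets)) for j in range(i + 1, len(top_sets))]
stability = np.mean([len(top_sets[i] & top_sets[j]) / len(top_sets[i] | top_sets[j])
                     for i, j in pairs])
print(f"Top-{TOP_K} SHAP feature stability across folds: {stability:.2f}")
```

A value near 1 would indicate that the same features drive the explanations in every fold; the paper's own generalizability, concordance, and stability metrics are defined in the full text.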
Pages: 101921-101935 (15 pages)