Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis

Cited by: 24
Authors
Muhammad, Dost [1]
Bendechache, Malika [1]
Affiliations
[1] Univ Galway, ADAPT Res Ctr, Sch Comp Sci, Galway, Ireland
Source
COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL | 2024, Vol. 24
Funding
Science Foundation Ireland
Keywords
Explainable AI; Medical image analysis; XAI in medical imaging; XAI in healthcare; AI; PREDICTION; DECISIONS
DOI
10.1016/j.csbj.2024.08.005
Chinese Library Classification (CLC)
Q5 [Biochemistry]; Q7 [Molecular Biology]
Discipline Codes
071010; 081704
Abstract
This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discussing current challenges and future research directions, and exploring evaluation metrics used to assess XAI approaches. With the growing effectiveness of Machine Learning (ML) and Deep Learning (DL) in medical applications, there is a critical need for their adoption in healthcare. However, their "black-box" nature, in which decisions are made without clear explanations, hinders acceptance in clinical settings where decisions have significant medicolegal consequences. Our review highlights advanced XAI methods, identifying how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges these methods face and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, fostering a more transparent, trustworthy, and effective use of AI in medical settings. The insights guide both research and industry, promoting innovation and standardisation in XAI implementation in healthcare.
Pages: 542-560
Page count: 19