Explainable Deep Learning for Interpretable Brain Tumor Diagnosis from MRI Images

Cited by: 0
Authors
Manziuk, Eduard [1 ]
Barmak, Olexander [1 ]
Krak, Iurii [2 ,3 ]
Petliak, Nataliia [1 ]
Jin, Zhenzhen [4 ]
Radiuk, Pavlo [1 ]
Affiliations
[1] Khmelnytskyi Natl Univ, Khmelnytskyi, Ukraine
[2] Taras Shevchenko Natl Univ Kyiv, Kyiv, Ukraine
[3] Glushkov Cybernet Inst, Kyiv, Ukraine
[4] Guangxi Univ, Nanning, Peoples R China
Source
LECTURE NOTES IN DATA ENGINEERING, COMPUTATIONAL INTELLIGENCE, AND DECISION-MAKING, VOL 1 | 2024, Vol. 219
Keywords
medical image analysis; neural networks; deep learning; interpretability; explainability; magnetic resonance imaging; classification; diagnostic rules; ARTIFICIAL-INTELLIGENCE; TRUSTWORTHY;
DOI
10.1007/978-3-031-70959-3_17
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper presents a new method for analyzing medical images with neural networks that improves the interpretability and explainability of diagnostic decisions. The method combines the strengths of deep learning for accurate image analysis with Boolean logic for interpretability, yielding transparent, interpretable neural network decisions. The proposed method comprises the following steps: building a complex convolutional neural network (VGG-16) for image analysis; applying an attention mechanism to highlight key regions; and developing a simplified, interpretable model (DRN) to explain the decisions. The results showed that VGG-16 achieved a classification accuracy above 0.95, while the DRN extracted logical rules from the features identified by VGG-16 with an accuracy of 0.76. Analysis of the DRN rules made it possible to understand the impact of individual image regions on each decision. Combining the accurate VGG-16 with the interpretable DRN thus couples high accuracy with interpretability, giving medical professionals insight into the key features behind diagnostic decisions. The paper demonstrates that the proposed approach balances the accuracy of an intelligent system with its interpretability, which is key to trust in, and adoption of, artificial intelligence in medicine. The results pave the way for the safe and responsible deployment of AI technologies in medical practice in accordance with ethical principles and regulatory requirements.
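The record gives no implementation details for the DRN, so the following is only a minimal sketch of the general pattern the abstract describes: an accurate but opaque classifier whose behavior is approximated by a small rule-based surrogate. Here scikit-learn stand-ins are assumed (an MLP in place of VGG-16, a shallow decision tree in place of the DRN, synthetic features in place of attention-weighted MRI regions); none of these choices come from the paper itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for features extracted from key image regions.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)

# 1) Accurate but opaque classifier (plays the role of VGG-16).
opaque = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                       random_state=0).fit(X, y)

# 2) Shallow surrogate fitted to the opaque model's *predictions*,
#    yielding human-readable if/then rules (plays the role of the DRN).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, opaque.predict(X))

# Fidelity: how often the surrogate's rules reproduce the opaque model.
fidelity = (surrogate.predict(X) == opaque.predict(X)).mean()
print(f"surrogate fidelity to opaque model: {fidelity:.2f}")
print(export_text(surrogate,
                  feature_names=[f"region_{i}" for i in range(8)]))
```

The printed rule tree shows which stand-in "region" features drive each decision, mirroring the paper's goal of tracing a diagnosis back to individual image areas, although the authors' actual rule-extraction procedure may differ substantially from a decision-tree surrogate.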
Pages: 326-348
Page count: 23