Applications of interpretable deep learning in neuroimaging: A comprehensive review

Cited: 1
Authors
Munroe, Lindsay [1 ]
da Silva, Mariana [2 ]
Heidari, Faezeh [3 ]
Grigorescu, Irina [2 ]
Dahan, Simon [2 ]
Robinson, Emma C. [2 ]
Deprez, Maria [2 ]
So, Po-Wah [1 ]
Affiliations
[1] Kings Coll London, Dept Neuroimaging, London, England
[2] Kings Coll London, Sch Biomed Engn & Imaging Sci, London, England
[3] Univ Eastern Finland, Inst Clin Med, Kuopio, Finland
Source
IMAGING NEUROSCIENCE | 2024, Vol. 2
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
interpretable deep learning; explainable AI; neuroimaging; intrinsic interpretability; CONVOLUTIONAL NEURAL-NETWORKS; MRI DATA; BRAIN; PREDICTION; CONNECTIVITY; MECHANISMS; IMAGE;
DOI
10.1162/imag_a_00214
Chinese Library Classification
Q189 [Neuroscience];
Discipline Code
071006;
Abstract
Clinical adoption of deep learning models has been hindered, in part, because the "black-box" nature of neural networks leads to concerns regarding their trustworthiness and reliability. These concerns are particularly relevant in the field of neuroimaging due to the complex brain phenotypes and inter-subject heterogeneity often encountered. The challenge can be addressed by interpretable deep learning (iDL) methods that enable the visualisation and interpretation of the inner workings of deep learning models. This study systematically reviewed the literature on neuroimaging applications of iDL methods and critically analysed how iDL explanation properties were evaluated. Seventy-five studies were included, and ten categories of iDL methods were identified. We also reviewed five properties of iDL explanations that were analysed in the included studies: biological validity, robustness, continuity, selectivity, and downstream task performance. We found that the most popular iDL approaches used in the literature may be sub-optimal for neuroimaging data, and we discussed possible future directions for the field.
Pages: 17-37 (21 pages)