Applications of interpretable deep learning in neuroimaging: A comprehensive review

Cited by: 1
Authors
Munroe, Lindsay [1]
da Silva, Mariana [2]
Heidari, Faezeh [3]
Grigorescu, Irina [2]
Dahan, Simon [2]
Robinson, Emma C. [2]
Deprez, Maria [2]
So, Po-Wah [1]
Affiliations
[1] Kings Coll London, Dept Neuroimaging, London, England
[2] Kings Coll London, Sch Biomed Engn & Imaging Sci, London, England
[3] Univ Eastern Finland, Inst Clin Med, Kuopio, Finland
Source
IMAGING NEUROSCIENCE | 2024, Vol. 2
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
interpretable deep learning; explainable AI; neuroimaging; intrinsic interpretability; CONVOLUTIONAL NEURAL-NETWORKS; MRI DATA; BRAIN; PREDICTION; CONNECTIVITY; MECHANISMS; IMAGE;
DOI
10.1162/imag_a_00214
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Subject Classification Code
071006;
Abstract
Clinical adoption of deep learning models has been hindered, in part, because the "black-box" nature of neural networks leads to concerns regarding their trustworthiness and reliability. These concerns are particularly relevant in the field of neuroimaging due to the complex brain phenotypes and inter-subject heterogeneity often encountered. The challenge can be addressed by interpretable deep learning (iDL) methods that enable the visualisation and interpretation of the inner workings of deep learning models. This study systematically reviewed the literature on neuroimaging applications of iDL methods and critically analysed how iDL explanation properties were evaluated. Seventy-five studies were included, and ten categories of iDL methods were identified. We also reviewed five properties of iDL explanations that were analysed in the included studies: biological validity, robustness, continuity, selectivity, and downstream task performance. We found that the most popular iDL approaches used in the literature may be sub-optimal for neuroimaging data, and we discussed possible future directions for the field.
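For context, one of the most widely used post-hoc iDL approaches covered by reviews of this kind is the gradient-based saliency map, which attributes a model's prediction to individual input voxels. Below is a minimal illustrative sketch, not taken from the reviewed paper, assuming a trained PyTorch classifier over 3D MRI volumes; the names model, volume, and target_class are hypothetical.

# Illustrative sketch only: vanilla gradient saliency for a 3D MRI classifier.
# Assumes `model` is a trained PyTorch network and `volume` has shape (1, 1, D, H, W).
import torch

def gradient_saliency(model, volume, target_class):
    """Return |d(logit_target)/d(input)| as a voxel-wise relevance map."""
    model.eval()
    x = volume.clone().detach().requires_grad_(True)  # leaf tensor tracking gradients
    logits = model(x)
    logits[0, target_class].backward()                # gradient of the target-class logit
    return x.grad.abs().squeeze()                     # saliency per voxel

Voxels with large gradient magnitude are those to which the predicted class score is most sensitive; more elaborate methods discussed in this literature (e.g., layer-wise relevance propagation) refine this basic attribution idea.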
Pages: 17-37
Page count: 21