Interpretable Deep Learning for Neuroimaging-Based Diagnostic Classification

Cited: 1
Authors
Deshpande, Gopikrishna [1 ,2 ,3 ,4 ,5 ,6 ]
Masood, Janzaib [1 ]
Huynh, Nguyen [1 ]
Denney Jr, Thomas S. [1 ,2 ,3 ,4 ]
Dretsch, Michael N. [7 ]
Affiliations
[1] Auburn Univ, Neuroimaging Ctr, Dept Elect & Comp Engn, Auburn, AL 36849 USA
[2] Auburn Univ, Dept Psychol Sci, Auburn, AL 36849 USA
[3] Alabama Adv Imaging Consortium, Birmingham, AL 35294 USA
[4] Auburn Univ, Ctr Neurosci, Auburn, AL 36849 USA
[5] Natl Inst Mental Hlth & Neurosci, Dept Psychiat, Bengaluru 560029, India
[6] Indian Inst Technol Hyderabad, Dept Heritage Sci & Technol, Hyderabad 502285, India
[7] Walter Reed Army Inst Res West, Joint Base Lewis McChord, WA 98433 USA
Keywords
Resting-state functional magnetic resonance; resting-state functional connectivity; interpretable deep learning; POSTTRAUMATIC-STRESS-DISORDER; ANTERIOR CINGULATE CORTEX; RESTING-STATE FMRI; FUNCTIONAL CONNECTIVITY; NETWORKS; ABUSE; MEMORIES; VETERANS; DISEASE; SERVICE;
DOI
10.1109/ACCESS.2024.3388911
Chinese Library Classification
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Deep neural networks (DNNs) are increasingly used in neuroimaging research for diagnosing brain disorders and understanding the human brain. Despite their impressive performance, their adoption in medical applications will remain limited unless there is greater transparency about how these algorithms arrive at their decisions. We address this issue in the current report. A DNN classifier was trained to discriminate between healthy subjects and those with posttraumatic stress disorder (PTSD) using brain connectivity obtained from functional magnetic resonance imaging data, achieving 90% accuracy. Brain connectivity features important for classification were generated for a pool of test subjects, and permutation testing was used to identify significantly discriminative connections. Heatmaps of these significant paths were generated from 10 different interpretability algorithms based on variants of layer-wise relevance propagation and gradient attribution methods. Because different interpretability algorithms make different assumptions about the data and the model, their explanations had both commonalities and differences. We therefore developed a consensus across interpretability methods, which aligned well with existing knowledge about the brain alterations underlying PTSD. More than 20 regions acknowledged for their relevance to PTSD in prior studies were confidently identified, with a voting score exceeding 8 and a family-wise correction threshold below 0.05. Our work illustrates how evaluating convergence across methods can yield robust and physiologically plausible explanations when interpreting classifications obtained from DNNs in diagnostic neuroimaging applications. This will be crucial for establishing trust in AI-based medical diagnostics in the future.
Pages: 55474-55490
Page count: 17
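
As a rough illustration of the consensus approach described in the abstract, the sketch below shows how attributions from several gradient-based interpretability methods (here via the Captum library for PyTorch) could be aggregated into a voting-based consensus over connectivity features. This is not the authors' code: the ConnectivityClassifier architecture, feature count, TOP_K cutoff, and MIN_VOTES threshold are hypothetical placeholders, and the paper's layer-wise relevance propagation variants and permutation testing are not reproduced.

    # Minimal sketch (not the authors' code): voting-based consensus over
    # attributions from several gradient-based interpretability methods for a
    # connectivity classifier. All names and constants below are hypothetical.
    import torch
    import torch.nn as nn
    from captum.attr import Saliency, InputXGradient, IntegratedGradients, DeepLift

    N_FEATURES = 90 * 89 // 2   # e.g., upper triangle of a 90-ROI connectivity matrix
    TOP_K = 100                 # cutoff for "important" features per method
    MIN_VOTES = 3               # consensus threshold across methods

    class ConnectivityClassifier(nn.Module):
        """Toy feed-forward classifier: healthy (0) vs. PTSD (1)."""
        def __init__(self, n_features):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64),
                nn.ReLU(),
                nn.Linear(64, 2),
            )

        def forward(self, x):
            return self.net(x)

    def consensus_features(model, x_test, target=1):
        """A connectivity feature is 'consensus-important' if it ranks in the
        top K for at least MIN_VOTES attribution methods."""
        methods = [Saliency(model), InputXGradient(model),
                   IntegratedGradients(model), DeepLift(model)]
        votes = torch.zeros(x_test.shape[1])
        for method in methods:
            attr = method.attribute(x_test, target=target).detach()  # (subjects, features)
            importance = attr.abs().mean(dim=0)                      # average over test subjects
            votes[importance.topk(TOP_K).indices] += 1
        return (votes >= MIN_VOTES).nonzero(as_tuple=True)[0]

    if __name__ == "__main__":
        model = ConnectivityClassifier(N_FEATURES).eval()
        x_test = torch.randn(30, N_FEATURES)   # placeholder for held-out connectivity vectors
        print(consensus_features(model, x_test))

In the study itself, per-connection significance is established with permutation testing under family-wise error correction before votes are tallied across ten methods; the simple top-K cutoff above only stands in for that step.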