Interpretable Deep Learning for Neuroimaging-Based Diagnostic Classification

Cited: 0
Authors
Deshpande, Gopikrishna [1 ,2 ,3 ,4 ,5 ,6 ]
Masood, Janzaib [1 ]
Huynh, Nguyen [1 ]
Denney Jr, Thomas S. [1 ,2 ,3 ,4 ]
Dretsch, Michael N. [7 ]
Affiliations
[1] Auburn Univ, Neuroimaging Ctr, Dept Elect & Comp Engn, Auburn, AL 36849 USA
[2] Auburn Univ, Dept Psychol Sci, Auburn, AL 36849 USA
[3] Alabama Adv Imaging Consortium, Birmingham, AL 35294 USA
[4] Auburn Univ, Ctr Neurosci, Auburn, AL 36849 USA
[5] Natl Inst Mental Hlth & Neurosci, Dept Psychiat, Bengaluru 560029, India
[6] Indian Inst Technol Hyderabad, Dept Heritage Sci & Technol, Hyderabad 502285, India
[7] Walter Reed Army Inst Res West, Joint Base Lewis McChord, WA 98433 USA
Keywords
Resting-state functional magnetic resonance; resting-state functional connectivity; interpretable deep learning; POSTTRAUMATIC-STRESS-DISORDER; ANTERIOR CINGULATE CORTEX; RESTING-STATE FMRI; FUNCTIONAL CONNECTIVITY; NETWORKS; ABUSE; MEMORIES; VETERANS; DISEASE; SERVICE
DOI
10.1109/ACCESS.2024.3388911
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep neural networks (DNNs) are increasingly used in neuroimaging research for diagnosing brain disorders and understanding the human brain. Despite their impressive performance, their use in medical applications will remain limited unless there is more transparency about how these algorithms arrive at their decisions. We address this issue in the current report. A DNN classifier was trained to discriminate between healthy subjects and those with posttraumatic stress disorder (PTSD) using brain connectivity obtained from functional magnetic resonance imaging data. The classifier achieved 90% accuracy. Brain connectivity features important for classification were generated for a pool of test subjects, and permutation testing was used to identify significantly discriminative connections. Heatmaps of significant paths were generated from 10 different interpretability algorithms based on variants of layer-wise relevance propagation and gradient attribution methods. Because different interpretability algorithms make different assumptions about the data and model, their explanations had both commonalities and differences. We therefore developed a consensus across interpretability methods, which aligned well with existing knowledge about the brain alterations underlying PTSD. More than 20 regions previously implicated in PTSD were confidently identified, each with a voting score exceeding 8 and a family-wise error-corrected threshold below 0.05. Our work illustrates how robustness and physiological plausibility can be achieved when interpreting DNN classifications in diagnostic neuroimaging by evaluating convergence across methods. This will be crucial for trust in AI-based medical diagnostics in the future.
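The consensus procedure summarized in the abstract — per-method attribution maps over connectivity features, a permutation test with family-wise error control, then a vote across methods — can be sketched as follows. This is a minimal illustration under assumed details: a sign-flip permutation null with a max-statistic correction, and hypothetical function names such as `significant_connections`. The attribution maps themselves would come from LRP- and gradient-based methods (e.g., via a library such as Captum); the paper's exact statistical procedure may differ.

```python
import numpy as np

def significant_connections(attributions, n_perm=2000, alpha=0.05, seed=0):
    """Permutation test per connection with max-statistic family-wise correction.

    attributions : (n_subjects, n_connections) array of attribution scores
                   from ONE interpretability method on the test pool.
    Returns a boolean mask of connections whose mean attribution exceeds
    the (1 - alpha) quantile of the permutation null distribution.
    """
    observed = attributions.mean(axis=0)
    rng = np.random.default_rng(seed)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        # Randomly flip the sign of each subject's attribution map to build
        # a null distribution of the maximum mean attribution (FWE control).
        signs = rng.choice([-1.0, 1.0], size=attributions.shape[0])[:, None]
        null_max[i] = (signs * attributions).mean(axis=0).max()
    threshold = np.quantile(null_max, 1.0 - alpha)
    return observed > threshold

def consensus(method_attributions, min_votes=8):
    """Vote across interpretability methods: a connection is retained only
    if at least `min_votes` methods flag it as significant."""
    votes = sum(significant_connections(a).astype(int)
                for a in method_attributions)
    return votes >= min_votes

# Usage with synthetic data: 10 methods, 50 test subjects, 4005 connections
# (upper triangle of a 90 x 90 connectivity matrix).
maps = [np.random.randn(50, 4005) for _ in range(10)]
robust_paths = consensus(maps, min_votes=8)
print(robust_paths.sum(), "connections survive the consensus vote")
```

The sign-flip null assumes attribution scores are symmetric about zero when a connection carries no class information; taking the quantile of the maximum statistic gives family-wise control analogous to the corrected threshold reported in the abstract.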
Pages: 55474-55490
Page count: 17