Can recurrent models know more than we do?

Cited by: 4
Authors
Lewis, Noah [1 ,2 ]
Miller, Robyn [2 ,3 ]
Gazula, Harshvardhan [4 ]
Rahman, Md Mahfuzur [2 ,3 ]
Iraji, Armin [2 ,3 ]
Calhoun, Vince D. [2 ,3]
Plis, Sergey [2 ,3 ]
Affiliations
[1] Georgia Tech, Atlanta, GA 30332 USA
[2] TReNDS, Atlanta, GA 30303 USA
[3] Georgia State Univ, Atlanta, GA USA
[4] Princeton Neurosci Inst, Princeton, NJ USA
Source
2021 IEEE 9TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2021) | 2021
Keywords
Machine Learning; Deep Learning; model interpretability; neuroimaging; visual saliency
DOI
10.1109/ICHI52183.2021.00046
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Model interpretation is an active research area that aims to unravel the black box of deep learning models. One common approach, saliency, leverages a model's gradients to produce a per-input map highlighting the features most important for a correct prediction. However, saliency faces challenges in recurrent models due to the "vanishing saliency" problem: gradients decay significantly toward earlier time steps. We alleviate this problem and improve the quality of saliency maps by augmenting recurrent models with an attention mechanism. We validate our methodology on synthetic data and compare the results to previous work; this experiment quantitatively confirms that our methodology effectively captures the underlying signal in the input data. To show that our approach holds in a real-world setting, we apply it to functional magnetic resonance imaging (fMRI) data from individuals with and without a diagnosis of schizophrenia. fMRI data are notoriously complex and high-dimensional, making them a strong test of our method. Specifically, we use our methodology to identify the relevant temporal information for each subject and connect our findings to current and past research.
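The abstract outlines the core technique: a gradient-based saliency map computed over a recurrent classifier that has been augmented with an attention mechanism. Below is a minimal, hypothetical PyTorch sketch of that idea; the paper's actual architecture, attention variant, and data dimensions are not specified in this record, and the shapes used here (53 features, 140 time steps) are illustrative placeholders only.

import torch
import torch.nn as nn

class AttentiveGRU(nn.Module):
    def __init__(self, input_dim, hidden_dim, n_classes):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)   # scores each hidden state
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):                       # x: (batch, time, features)
        h, _ = self.gru(x)                      # h: (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        ctx = (w * h).sum(dim=1)                # weighted context: (batch, hidden)
        return self.head(ctx)                   # class logits

def saliency_map(model, x):
    # Gradient of the top predicted logit with respect to every input value.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum().backward()
    return x.grad.abs()                         # (batch, time, features)

# Illustrative shapes only: 8 subjects, 140 time points, 53 features.
model = AttentiveGRU(input_dim=53, hidden_dim=64, n_classes=2)
sal = saliency_map(model, torch.randn(8, 140, 53))
per_step = sal.mean(dim=(0, 2))                 # importance per time step

Averaging the saliency tensor over the batch and feature axes, as in the last line, yields a per-time-step importance curve; for a plain recurrent model without the attention branch, such a curve would be expected to decay toward early time steps, which is the "vanishing saliency" behavior the paper targets.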
Pages: 243-247 (5 pages)