Can recurrent models know more than we do?

Cited by: 3
Authors
Lewis, Noah [1 ,2 ]
Miller, Robyn [2 ,3 ]
Gazula, Harshvardhan [4 ]
Rahman, Md Mahfuzur [2 ,3 ]
Iraji, Armin [2 ,3 ]
Calhoun, Vince D. [2 ,3 ]
Plis, Sergey [2 ,3 ]
Affiliations
[1] Georgia Tech, Atlanta, GA 30332 USA
[2] TReNDS, Atlanta, GA 30303 USA
[3] Georgia State Univ, Atlanta, GA USA
[4] Princeton Neurosci Inst, Princeton, NJ USA
Source
2021 IEEE 9TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2021) | 2021
Keywords
Machine Learning; Deep Learning; model interpretability; neuroimaging; VISUAL SALIENCY;
DOI
10.1109/ICHI52183.2021.00046
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Model interpretation is an active research area that aims to unravel the black box of deep learning models. One common approach, saliency, leverages the gradients of the model to produce a per-input map highlighting the features most important for a correct prediction. However, saliency faces challenges in recurrent models due to the "vanishing saliency" problem: gradients decay significantly towards earlier time steps. We alleviate this problem and improve the quality of saliency maps by augmenting recurrent models with an attention mechanism. We validate our methodology on synthetic data and compare the results to previous work. This synthetic experiment quantitatively validates that our methodology effectively captures the underlying signal in the input data. To show that our work holds in a real-world setting, we apply it to functional magnetic resonance imaging (fMRI) data from individuals with and without a diagnosis of schizophrenia. fMRI data are notoriously complex and high-dimensional, making them a demanding test of whether our method scales beyond synthetic settings. Specifically, we use our methodology to identify the temporal information most relevant to each subject and connect our findings to current and past research.
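The approach described in the abstract, gradient-based saliency read out from a recurrent classifier whose prediction is formed through an attention layer, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the PyTorch LSTM, the single-layer attention scorer, the layer sizes, and the toy input shape (140 time points of 53 features, loosely mimicking an ICA-reduced fMRI time course) are all placeholders chosen for demonstration.

```python
import torch
import torch.nn as nn

class AttentiveRNN(nn.Module):
    """Recurrent classifier with a simple attention layer over its hidden states."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hid_dim, batch_first=True)
        self.score = nn.Linear(hid_dim, 1)        # attention scorer, one score per time step
        self.clf = nn.Linear(hid_dim, n_classes)

    def forward(self, x):                          # x: (batch, time, features)
        h, _ = self.rnn(x)                         # hidden state at every time step
        alpha = torch.softmax(self.score(h), dim=1)    # attention weights over time
        context = (alpha * h).sum(dim=1)           # attention-weighted sequence summary
        return self.clf(context), alpha

# Saliency map: gradient of the predicted-class logit with respect to the input,
# reduced to one importance value per time step of the sequence.
model = AttentiveRNN(in_dim=53, hid_dim=64, n_classes=2)
x = torch.randn(1, 140, 53, requires_grad=True)    # one synthetic fMRI-like sequence
logits, attn = model(x)
logits[0, logits.argmax()].backward()
saliency = x.grad.abs().sum(dim=-1)                # shape (1, 140): per-time-step saliency
```

The intuition this sketch is meant to convey is that the attention layer gives every time step a short, gradient-friendly path to the output, so input gradients no longer have to propagate solely through the recurrent chain, which is the mechanism the abstract credits for mitigating vanishing saliency.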
Pages: 243-247
Page count: 5