Feature-Flow Interpretation of Deep Convolutional Neural Networks

Cited by: 9
Authors
Cui, Xinrui [1]
Wang, Dan [1]
Wang, Z. Jane [1]
Affiliations
[1] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords
Visualization; Computational modeling; Perturbation methods; Convolutional neural networks; Medical services; Birds; Model interpretability; feature-flow; sparse representation;
DOI
10.1109/TMM.2020.2976985
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Despite the great success of deep convolutional neural networks (DCNNs) in computer vision tasks, their black-box nature remains a critical concern, and the interpretability of DCNN models has been attracting increasing attention. In this work, we propose a novel Feature-fLOW INterpretation (FLOWIN) model that interprets a DCNN through its feature-flow. FLOWIN expresses deep-layer features as a sparse representation of shallow-layer features and, based on that, distills the optimal feature-flow for the prediction of a given instance, proceeding from deep layers to shallow layers. FLOWIN can therefore provide an instance-specific interpretation that presents the feature-flow units, and their interpretable meanings, behind a network decision. It can also give a quantitative interpretation in which the contribution of each flow unit in different layers is used to explain the network decision. From a class-level view, networks can be further understood by studying feature-flows within and between classes. FLOWIN not only visualizes the feature-flow but also studies it quantitatively by investigating its density and similarity metrics. In our experiments, FLOWIN is evaluated on different datasets and networks, both quantitatively and qualitatively, to demonstrate its interpretability.
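The sparse-representation step described in the abstract can be illustrated with a minimal sketch, not the authors' implementation: one instance's pooled deep-layer feature is reconstructed as a sparse (lasso) combination of its shallow-layer channel responses, and the nonzero coefficients are read as that instance's flow units and contribution scores. All names (deep_feat, shallow_feats), the toy data, and the lasso penalty are illustrative assumptions.

```python
# Minimal sketch (illustrative only): express a deep-layer feature vector as a
# sparse combination of shallow-layer channel responses for one instance.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy stand-ins for per-instance features:
#   deep_feat:     pooled deep-layer activation, shape (d,)
#   shallow_feats: one pooled response vector per shallow channel,
#                  stacked as columns of a dictionary, shape (d, k)
d, k = 256, 64
shallow_feats = rng.normal(size=(d, k))
true_coef = np.zeros(k)
true_coef[[3, 17, 42]] = [1.5, -0.8, 2.0]            # only a few units matter
deep_feat = shallow_feats @ true_coef + 0.01 * rng.normal(size=d)

# Sparse reconstruction: deep_feat ≈ shallow_feats @ coef, with an L1 penalty
# driving most coefficients to exactly zero.
lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10_000)
lasso.fit(shallow_feats, deep_feat)
coef = lasso.coef_

# Nonzero coefficients mark the shallow units that "flow" into the deep
# feature for this instance; their magnitudes serve as contribution scores.
# Repeating this layer by layer, from deep to shallow, traces a per-instance
# feature-flow path.
flow_units = np.flatnonzero(coef)
contributions = coef[flow_units]
for u in flow_units[np.argsort(-np.abs(contributions))]:
    print(f"shallow unit {u:3d}: contribution {coef[u]:+.3f}")
```

In this sketch the flow-unit contributions are simply the lasso coefficient magnitudes; the paper's quantitative interpretation and its density/similarity metrics are built on the per-layer feature-flow, not reproduced here.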
Pages: 1847-1861
Number of pages: 15
Related Papers
50 records in total
  • [31] On the Interpretation of Convolutional Neural Networks for Text Classification
    Xu, Jincheng
    Du, Qingfeng
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 2252 - 2259
  • [32] Deep Anchored Convolutional Neural Networks
    Huang, Jiahui
    Dwivedi, Kshitij
    Roig, Gemma
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 639 - 647
  • [33] DEEP CONVOLUTIONAL NEURAL NETWORKS FOR LVCSR
    Sainath, Tara N.
    Mohamed, Abdel-rahman
    Kingsbury, Brian
    Ramabhadran, Bhuvana
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 8614 - 8618
  • [34] Deep Unitary Convolutional Neural Networks
    Chang, Hao-Yuan
    Wang, Kang L.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT II, 2021, 12892 : 170 - 181
  • [35] Universality of deep convolutional neural networks
    Zhou, Ding-Xuan
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2020, 48 (02) : 787 - 794
  • [36] A Review on Deep Convolutional Neural Networks
    Aloysius, Neena
    Geetha, M.
    2017 INTERNATIONAL CONFERENCE ON COMMUNICATION AND SIGNAL PROCESSING (ICCSP), 2017, : 588 - 592
  • [37] Convolutional Neural Networks for Interpretation of Coronary Angiography
    Lee, Paul C.
    Lee, Nathaniel
    Pyo, Robert
    CIRCULATION, 2019, 140
  • [38] Spatial deep convolutional neural networks
    Wang, Qi
    Parker, Paul A.
    Lund, Robert
    SPATIAL STATISTICS, 2025, 66
  • [39] Convergence of deep convolutional neural networks
    Xu, Yuesheng
    Zhang, Haizhang
    NEURAL NETWORKS, 2022, 153 : 553 - 563
  • [40] Fusion of Deep Convolutional Neural Networks
    Suchy, Robert
    Ezekiel, Soundararajan
    Cornacchia, Maria
2017 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR), 2017