Detecting Information Relays in Deep Neural Networks

Cited by: 2
Authors
Hintze, Arend [1 ,2 ]
Adami, Christoph [2 ,3 ,4 ]
Affiliations
[1] Dalarna Univ, Dept MicroData Analyt, S-79131 Falun, Sweden
[2] Michigan State Univ, BEACON Ctr Study Evolut Act, E Lansing, MI 48824 USA
[3] Michigan State Univ, Dept Microbiol & Mol Genet, E Lansing, MI 48824 USA
[4] Michigan State Univ, Program Evolut Ecol & Behav, E Lansing, MI 48824 USA
Funding
U.S. National Science Foundation;
Keywords
information theory; deep learning; relay;
DOI
10.3390/e25030401
CLC Number
O4 [Physics];
Discipline Code
0702;
Abstract
Deep learning of artificial neural networks (ANNs) is creating highly functional processes that are, unfortunately, nearly as hard to interpret as their biological counterparts. Identification of functional modules in natural brains plays an important role in cognitive science and neuroscience alike, and can be carried out using a wide range of technologies such as fMRI, EEG/ERP, MEG, or calcium imaging. However, we do not have such robust methods at our disposal when it comes to understanding functional modules in artificial neural networks. Ideally, understanding which parts of an artificial neural network perform what function might help us to address a number of vexing problems in ANN research, such as catastrophic forgetting and overfitting. Furthermore, revealing a network's modularity could improve our trust in these networks by making such black boxes more transparent. Here, we introduce a new information-theoretic concept that proves useful in understanding and analyzing a network's functional modularity: the relay information I_R. The relay information measures how much information groups of neurons that participate in a particular function (modules) relay from inputs to outputs. Combined with a greedy search algorithm, relay information can be used to identify computational modules in neural networks. We also show that the functionality of modules correlates with the amount of relay information they carry.
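The abstract only sketches the idea of combining relay information with a greedy search; the paper's exact definition of I_R is not reproduced in this record. As a rough illustration only, the following sketch assumes discrete (e.g., binary) neuron states and uses plain mutual information between a candidate neuron subset's joint state and the network's outputs as a stand-in for the relay measure; the function names (`mutual_information`, `greedy_relay_module`) and the stopping `threshold` are hypothetical choices, not the authors' implementation.

```python
import numpy as np

def mutual_information(x, y):
    """Estimate I(X;Y) in bits from paired discrete samples."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

def greedy_relay_module(hidden_states, outputs, threshold=0.99):
    """Greedily grow a neuron subset whose joint state carries (almost)
    all of the information the full hidden layer shares with the outputs.

    hidden_states: (n_samples, n_neurons) array of discrete states.
    outputs: length-n_samples sequence of discrete output states.
    """
    n_neurons = hidden_states.shape[1]
    full = [tuple(row) for row in hidden_states]
    target = mutual_information(full, outputs)  # information to be relayed
    module, remaining, best_mi = [], set(range(n_neurons)), 0.0
    while remaining and best_mi < threshold * target:
        # Add the neuron whose inclusion yields the largest information gain.
        gains = []
        for j in remaining:
            cand_states = [tuple(row) for row in hidden_states[:, module + [j]]]
            gains.append((mutual_information(cand_states, outputs), j))
        best_mi, best_j = max(gains)
        module.append(best_j)
        remaining.discard(best_j)
    return sorted(module), best_mi
```

On synthetic data where the output depends on a single hidden neuron, this greedy procedure recovers that neuron as the (minimal) module, mirroring the search strategy described in the abstract at a toy scale.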
Pages: 18