Neural self-compressor: Collective interpretation by compressing multi-layered neural networks into non-layered networks

Cited by: 14
Authors
Kamimura, Ryotaro [1 ]
Affiliations
[1] Tokai Univ, IT Educ Ctr, 4-1-1 Kitakaname, Hiratsuka, Kanagawa 2591292, Japan
Funding
Japan Society for the Promotion of Science (JSPS);
Keywords
Model compression; self-compression; collective interpretation; mutual information; multi-layered neural networks; INFORMATION MAXIMIZATION; MUTUAL INFORMATION; RULE EXTRACTION;
DOI
10.1016/j.neucom.2018.09.036
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The present paper proposes a new method called "neural self-compressors" to compress multi-layered neural networks into the simplest possible ones (i.e., without hidden layers) to aid in the interpretation of relations between inputs and outputs. Though neural networks have shown great success in improving generalization, the interpretation of internal representations becomes a serious problem as the number of hidden layers and their corresponding connection weights grows. To overcome this interpretation problem, we introduce a method that compresses multi-layered neural networks into ones without hidden layers. In addition, this method simplifies entangled weights as much as possible by maximizing mutual information between inputs and outputs. In this way, the final connection weights can be interpreted as easily as the coefficients of a logistic regression analysis. The method was applied to four data sets: a symmetric data set, an ovarian cancer data set, a restaurant data set, and a credit card holders' default data set. With the first, symmetric data set, we explain intuitively how the present method produces interpretable outputs. In all the other cases, we succeeded in compressing multi-layered neural networks into their simplest forms with the help of mutual information maximization. In addition, by de-correlating outputs, we were able to transform connection weights from those close to the regression coefficients to ones with more explicit features. (C) 2018 Elsevier B.V. All rights reserved.
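To make the core idea concrete, the following is a minimal sketch of what "compressing a multi-layered network into one without hidden layers" can mean in the simplest (purely linear) case: the stack of layer weight matrices collapses exactly into a single direct input-output weight matrix. This is only an illustrative assumption for intuition, not the paper's actual self-compression procedure, which handles nonlinear layers and uses mutual information maximization; all matrix names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer linear network: 5 inputs -> 4 hidden -> 3 hidden -> 2 outputs.
W1 = rng.normal(size=(4, 5))   # input -> first hidden layer
W2 = rng.normal(size=(3, 4))   # first hidden -> second hidden layer
W3 = rng.normal(size=(2, 3))   # second hidden -> output layer

# "Compressed" network without hidden layers: the product of the
# layer matrices gives one direct weight per input-output pair.
W_direct = W3 @ W2 @ W1        # shape (2, 5)

# For a purely linear network the compressed weights reproduce the
# original network's output exactly.
x = rng.normal(size=5)
assert np.allclose(W3 @ (W2 @ (W1 @ x)), W_direct @ x)

print(W_direct.shape)  # (2, 5): as interpretable as logistic-regression coefficients
```

With nonlinear activations this exact collapse no longer holds, which is why the paper's method additionally simplifies the entangled weights before interpreting the resulting direct input-output connections.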
Pages: 12 - 36
Page count: 25
Related Papers
20 records in total
  • [1] Improving collective interpretation by extended potentiality assimilation for multi-layered neural networks
    Kamimura, Ryotaro
    Takeuchi, Haruhiko
    CONNECTION SCIENCE, 2020, 32 (02) : 174 - 203
  • [2] Repeated Potentiality Augmentation for Multi-layered Neural Networks
    Kamimura, Ryotaro
    ADVANCES IN INFORMATION AND COMMUNICATION, FICC, VOL 2, 2023, 652 : 117 - 134
  • [3] Interpreting Collectively Compressed Multi-Layered Neural Networks
    Kamimura, Ryotaro
    PROCEEDINGS OF THE IEEE 2019 9TH INTERNATIONAL CONFERENCE ON CYBERNETICS AND INTELLIGENT SYSTEMS (CIS) ROBOTICS, AUTOMATION AND MECHATRONICS (RAM) (CIS & RAM 2019), 2019, : 95 - 100
  • [4] Excessive, Selective and Collective Information Processing to Improve and Interpret Multi-layered Neural Networks
    Kamimura, Ryotaro
    Takeuchi, Haruhiko
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 1, 2019, 868 : 664 - 675
  • [5] Information-Theoretic Self-compression of Multi-layered Neural Networks
    Kamimura, Ryotaro
    THEORY AND PRACTICE OF NATURAL COMPUTING (TPNC 2018), 2018, 11324 : 401 - 413
  • [6] Connective Potential Information for Collectively Interpreting Multi-Layered Neural Networks
    Kamimura, Ryotaro
    2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 3033 - 3042
  • [7] SUPPOSED MAXIMUM MUTUAL INFORMATION FOR IMPROVING GENERALIZATION AND INTERPRETATION OF MULTI-LAYERED NEURAL NETWORKS
    Kamimura, Ryotaro
    JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH, 2019, 9 (02) : 123 - 147
  • [8] Local Selective Learning for Interpreting Multi-Layered Neural Networks
    Kamimura, Ryotaro
    Kitajima, Ryozo
    Sakai, Hiroyuki
    2018 JOINT 10TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS (SCIS) AND 19TH INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (ISIS), 2018, : 115 - 122
  • [9] Impartial competitive learning in multi-layered neural networks
    Kamimura, Ryotaro
    CONNECTION SCIENCE, 2023, 35 (01)
  • [10] Multi-Layered Neural Networks with Learning of Output Functions
    Ma, Lixin
    Miyajima, Hiromi
    Shigei, Noritaka
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2006, 6 (3A): : 140 - 145