Repeated Potentiality Augmentation for Multi-layered Neural Networks

Cited by: 0
Authors
Kamimura, Ryotaro [1 ,2 ]
Affiliations
[1] Tokai Univ, 2880 Kamimatsuo Nishi Ku, Kumamoto 8615289, Japan
[2] Kumamoto Drone Technol & Dev Fdn, 2880 Kamimatsuo Nishi Ku, Kumamoto 8615289, Japan
Source
ADVANCES IN INFORMATION AND COMMUNICATION, FICC, VOL 2 | 2023 / Vol. 652
Keywords
Equi-potentiality; Total potentiality; Relative potentiality; Collective interpretation; Partial interpretation; MUTUAL INFORMATION; LEARNING-MODELS; CLASSIFICATION; MAXIMIZE; INPUT; MAPS;
DOI
10.1007/978-3-031-28073-3_9
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The present paper proposes a new method to augment the potentiality of components in neural networks. The basic hypothesis is that all components should have equal potentiality (equi-potentiality) to be used in learning. This equi-potentiality of components has implicitly played a critical role in improving multi-layered neural networks. We introduce the total potentiality and the relative potentiality for each hidden layer, and we force networks to increase the potentiality as much as possible so as to realize the equi-potentiality. In addition, the potentiality augmentation is repeated whenever the potentiality tends to decrease, which increases the chance that all components are used as equally as possible. We applied the method to a bankruptcy data set. By maintaining the equi-potentiality of components through the repeated process of potentiality augmentation and reduction, we observed improved generalization. Then, by considering all representations produced by the repeated potentiality augmentation, we can interpret which inputs contribute to the final performance of the networks.
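The abstract only sketches the procedure, so the following short Python/NumPy sketch is purely illustrative and rests on assumptions: per-unit "potentiality" is taken here to be a normalized connection-strength measure, total potentiality its sum, relative potentiality each unit's share of that sum, and equi-potentiality a push toward a uniform distribution of the shares; the augmentation step and the re-triggering rule (re-augment whenever the total potentiality decreases) are hypothetical stand-ins for the paper's actual definitions.

# Illustrative sketch only; the quantities below are assumed interpretations,
# not the paper's exact formulas.
import numpy as np

def unit_potentiality(W):
    # Assumed per-unit potentiality: absolute connection strength of each
    # hidden unit, normalized by the maximum so values lie in [0, 1].
    strength = np.abs(W).sum(axis=0)
    return strength / (strength.max() + 1e-12)

def total_potentiality(p):
    # Assumed total potentiality: sum of per-unit potentialities.
    return p.sum()

def relative_potentiality(p):
    # Assumed relative potentiality: each unit's share of the total.
    return p / (p.sum() + 1e-12)

def equi_potentiality_penalty(W):
    # Assumed equi-potentiality objective: negative entropy of the relative
    # potentialities; minimizing it pushes all units toward equal use.
    r = relative_potentiality(unit_potentiality(W))
    return np.sum(r * np.log(r + 1e-12))

def augment_potentiality(W, rate=0.05):
    # Assumed augmentation step: boost weaker units more strongly, raising
    # every unit's potentiality toward 1.
    p = unit_potentiality(W)
    boost = 1.0 + rate * (1.0 - p)
    return W * boost[np.newaxis, :]

# Repeated augmentation: whenever the total potentiality decreases during
# (simulated) training, the augmentation step is applied again.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(10, 8))        # 10 inputs -> 8 hidden units
previous_total = total_potentiality(unit_potentiality(W))

for epoch in range(20):
    W *= 1.0 - 0.02 * rng.random(W.shape)      # stand-in for weight decay during training
    current_total = total_potentiality(unit_potentiality(W))
    if current_total < previous_total:         # potentiality decreased: augment again
        W = augment_potentiality(W)
        current_total = total_potentiality(unit_potentiality(W))
    previous_total = current_total

print("final relative potentialities:",
      np.round(relative_potentiality(unit_potentiality(W)), 3))
print("equi-potentiality penalty (negative entropy):",
      round(equi_potentiality_penalty(W), 3))

Running the sketch prints relative potentialities that stay close to uniform across the hidden units, which corresponds to the equi-potentiality condition the abstract aims at.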
Pages: 117-134
Number of pages: 18