Repeated Potentiality Augmentation for Multi-layered Neural Networks

Cited by: 0
Authors
Kamimura, Ryotaro [1 ,2 ]
Affiliations
[1] Tokai Univ, 2880 Kamimatsuo Nishi Ku, Kumamoto 8615289, Japan
[2] Kumamoto Drone Technol & Dev Fdn, 2880 Kamimatsuo Nishi Ku, Kumamoto 8615289, Japan
Source
ADVANCES IN INFORMATION AND COMMUNICATION, FICC, VOL 2 | 2023 / Vol. 652
Keywords
Equi-potentiality; Total potentiality; Relative potentiality; Collective interpretation; Partial interpretation; MUTUAL INFORMATION; LEARNING-MODELS; CLASSIFICATION; MAXIMIZE; INPUT; MAPS
DOI
10.1007/978-3-031-28073-3_9
CLC Classification Number
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
This paper proposes a new method for augmenting the potentiality of components in neural networks. The basic hypothesis is that all components should have equal potentiality (equi-potentiality) to be used in learning; this equi-potentiality has implicitly played a critical role in improving multi-layered neural networks. We introduce the total potentiality and relative potentiality of each hidden layer and force networks to increase their potentiality as much as possible so as to realize equi-potentiality. In addition, the augmentation is repeated whenever the potentiality tends to decrease, which gives every component as equal a chance as possible of being used. We applied the method to a bankruptcy data set. By maintaining the equi-potentiality of components through repeated cycles of potentiality augmentation and reduction, we observed improved generalization. Moreover, by considering all representations produced by the repeated potentiality augmentation, we can interpret which inputs contribute to the final performance of the networks.
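The abstract does not give the paper's exact definitions, but the repeated-augmentation idea can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration only: we take a hidden unit's relative potentiality to be the norm of its incoming weights scaled by the layer maximum, the total potentiality to be the sum of these values (maximal under equi-potentiality), and augmentation to be a rescaling that boosts weak units toward the strongest one.

```python
import numpy as np

def relative_potentiality(W):
    """Per-unit potentiality: incoming-weight norm, scaled so the max is 1
    (hypothetical stand-in for the paper's relative potentiality)."""
    strength = np.linalg.norm(W, axis=0)   # one norm per hidden unit (column)
    return strength / strength.max()

def total_potentiality(W):
    """Sum of relative potentialities; equals the unit count only when
    every unit has equal strength (equi-potentiality)."""
    return relative_potentiality(W).sum()

def augment(W, rate=0.5):
    """One augmentation step: weaker units are scaled up more strongly,
    pushing the layer toward equi-potentiality."""
    r = relative_potentiality(W)
    return W * (1.0 + rate * (1.0 - r))

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))                # 5 inputs -> 8 hidden units
before = total_potentiality(W)
for _ in range(10):                        # repeat whenever potentiality lags
    W = augment(W)
after = total_potentiality(W)
```

Each step leaves the strongest unit unchanged and lifts the others toward it, so the total potentiality increases monotonically toward the number of hidden units; the paper's actual augmentation and reduction phases may of course differ.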
Pages: 117-134
Page count: 18