Analyzing weight distribution of feedforward neural networks and efficient weight initialization

Cited by: 0
Authors
Go, J [1 ]
Baek, B [1 ]
Lee, C [1 ]
Affiliation
[1] Yonsei Univ, BERC, Dept Elect & Elect Engn, Seoul 120749, South Korea
Source
STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, PROCEEDINGS | 2004, Vol. 3138
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we investigate and analyze the weight distribution of feedforward two-layer neural networks in order to understand and improve their time-consuming training process. Training a neural network generally takes a long time, and when a new problem is presented, the network must be trained again without any benefit from previous training. To address this problem, we view the training process as finding a solution weight point in a weight space and analyze the distribution of solution weight points in that space. We then propose a weight initialization method that uses this information on the distribution of solution weight points. Experimental results show that, in terms of convergence speed, the proposed weight initialization method performs better than the conventional method that uses a random generator.
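The abstract does not give the paper's exact initialization formula, but the idea it describes — replacing uniform random initialization with samples drawn from the observed distribution of solution weight points — can be sketched as follows. This is a minimal illustration, assuming solution weights from several earlier training runs are available; the function names and the sample matrices are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_init(shape, scale=0.5):
    """Conventional initialization: i.i.d. uniform random weights."""
    return rng.uniform(-scale, scale, size=shape)

def distribution_init(solution_weights):
    """Initialize from the empirical distribution of solution weight points.

    solution_weights: array of shape (n_runs, rows, cols) holding the
    final ("solution") weights of previously trained networks.  New
    initial weights are drawn from a per-weight Gaussian fitted to
    those solution points, so training starts near the solution region.
    """
    mu = solution_weights.mean(axis=0)
    sigma = solution_weights.std(axis=0) + 1e-8  # avoid zero spread
    return rng.normal(mu, sigma)

# Hypothetical data: five earlier trainings that converged to weight
# matrices clustered around the same solution region.
prev = np.stack([
    np.array([[1.0, -0.5], [0.2, 0.8], [-1.2, 0.4]])
    + 0.05 * rng.standard_normal((3, 2))
    for _ in range(5)
])

w_informed = distribution_init(prev)   # starts near the solution cluster
w_random = random_init((3, 2))         # conventional uniform baseline
```

Under this reading, the informed initializer concentrates starting points where solutions have previously been found, which is consistent with the faster convergence the abstract reports.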
Pages: 840-849
Page count: 10
Related papers (50 in total)
  • [21] A hybrid algorithm for weight and connectivity optimization in feedforward neural networks
    Pettersson, F
    Saxén, H
    ARTIFICIAL NEURAL NETS AND GENETIC ALGORITHMS, PROCEEDINGS, 2003, : 47 - 52
  • [22] WEIGHT DECAY AND RESOLUTION EFFECTS IN FEEDFORWARD ARTIFICIAL NEURAL NETWORKS
    MUNDIE, DB
    MASSENGILL, LW
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1991, 2 (01): : 168 - 170
  • [23] Data-driven weight initialization strategy for convolutional neural networks
    Narkhede, Meenal
    Mahajan, Shrinivas
    Bartakke, Prashant
    EVOLUTIONARY INTELLIGENCE, 2025, 18 (01)
  • [24] AutoInit: Analytic Signal-Preserving Weight Initialization for Neural Networks
    Bingham, Garrett
    Miikkulainen, Risto
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 6823 - 6833
  • [25] Solving the linear interval tolerance problem for weight initialization of neural networks
    Adam, S. P.
    Karras, D. A.
    Magoulas, G. D.
    Vrahatis, M. N.
    NEURAL NETWORKS, 2014, 54 : 17 - 37
  • [26] Variance-Aware Weight Initialization for Point Convolutional Neural Networks
    Hermosilla, Pedro
    Schelling, Michael
    Ritschel, Tobias
    Ropinski, Timo
    COMPUTER VISION - ECCV 2022, PT XXVIII, 2022, 13688 : 74 - 89
  • [27] Reducing oversmoothing through informed weight initialization in graph neural networks
    Dimitrios Kelesis
    Dimitris Fotakis
    Georgios Paliouras
    Applied Intelligence, 2025, 55 (7)
  • [28] Feedforward neural networks initialization based on discriminant learning
    Chumachenko, Kateryna
    Iosifidis, Alexandros
    Gabbouj, Moncef
    NEURAL NETWORKS, 2022, 146 : 220 - 229
  • [29] On a weight limit approach for enhancing fault tolerance of feedforward neural networks
    Kamiura, N
    Isokawa, T
    Hata, Y
    Matsui, N
    Yamato, K
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2000, E83D (11): : 1931 - 1939