The Convergence of Incremental Neural Networks

Cited: 0
Authors
Chen, Lei [1 ]
Wang, Yilin [2 ]
Zhang, Lixiao [1 ]
Chen, Wei [1 ]
Affiliations
[1] Midea Grp, Kitchen & Water Heater Appliance, Foshan, Peoples R China
[2] Shenyang Aerosp Univ, Coll Aeronaut & Astronaut, Shenyang, Peoples R China
Keywords
Feedforward neural network; Convergence rate; Universal approximation; Random search; MULTILAYER FEEDFORWARD NETWORKS; EXTREME LEARNING-MACHINE; APPROXIMATION CAPABILITY; BOUNDS; RATES
DOI
10.1007/s11063-023-11429-4
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The convergence of neural networks is a central research question, as it underpins both the universal approximation capability and the structural complexity of these systems. In this study, we analyze a generalized convex incremental iteration method that extends previous work with a broader formulation admitting a wider range of weight parameters, and we rigorously establish the convergence rate of this convex iteration scheme. We further adopt a discrete statistical perspective to address the non-compactness of the input data and the fact that the objective function is unknown in practical settings, which broadens the applicability of the analysis. To support our conclusions, we introduce two implementation algorithms, back propagation and random search; the latter helps prevent the network from becoming trapped in suboptimal local minima during training. Finally, we present results on a variety of regression problems that provide empirical evidence of the performance of our algorithms and confirm their agreement with the theoretical predictions. These results advance the understanding of neural network convergence and its implications for universal approximation.
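The abstract describes a convex incremental iteration of the form f_n = (1 − β_n) f_{n−1} + β_n g_n, where g_n is a newly added hidden node selected by random search. The following is a minimal illustrative sketch of that general idea, not the paper's actual algorithm; the node count, candidate count, activation, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_convex_incremental(X, y, n_nodes=30, n_candidates=20):
    """Greedy convex-incremental fit: f_n = (1 - beta) * f_{n-1} + beta * g_n.

    At each step, several random hidden nodes are sampled (random search)
    and the candidate that most reduces the residual is kept.
    """
    n, d = X.shape
    f = np.zeros(n)          # current network output on the training set
    params = []              # kept hidden nodes: (w, b, beta)
    for _ in range(n_nodes):
        e = y - f            # current residual
        best = None
        for _ in range(n_candidates):
            w = rng.standard_normal(d)
            b = rng.standard_normal()
            g = np.tanh(X @ w + b)    # candidate hidden-node output
            h = g - f
            denom = h @ h
            if denom < 1e-12:
                continue
            # optimal convex weight minimizing ||y - (1-beta)f - beta*g||^2,
            # clipped to [0, 1] so the update stays a convex combination
            beta = float(np.clip((e @ h) / denom, 0.0, 1.0))
            f_new = (1.0 - beta) * f + beta * g
            err = float(np.sum((y - f_new) ** 2))
            if best is None or err < best[0]:
                best = (err, w, b, beta, f_new)
        if best is not None:
            _, w, b, beta, f = best
            params.append((w, b, beta))
    return params, f
```

Because beta = 0 (keeping the previous network) is always admissible, the training error is non-increasing in the number of added nodes, which is the monotonicity property convex incremental schemes rely on.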
Pages: 12481 - 12499
Number of pages: 19
Related Papers
50 records (items [41]-[50] shown)
  • [41] On Approximation by Neural Networks with Optimized Activation Functions and Fixed Weights
    Yu, Dansheng
    Qian, Yunyou
    Li, Fengjun
    ANALYSIS IN THEORY AND APPLICATIONS, 2023, 39 (01) : 93 - 104
  • [42] Convergence and gradient algorithm of a class of neural networks based on the polygonal fuzzy numbers representation
    Sun, Gang
    Wang, Mingxin
    Li, Xiaoping
    COMPLEX & INTELLIGENT SYSTEMS, 2022, 8 (03) : 2383 - 2404
  • [43] Efficient Approximation of High-Dimensional Functions With Neural Networks
    Cheridito, Patrick
    Jentzen, Arnulf
    Rossmannek, Florian
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (07) : 3079 - 3093
  • [44] Two noise tolerant incremental learning algorithms for single layer feed-forward neural networks
    Adegoke, M.
    Wong, H. T.
    Leung, A. C. S.
    Sum, J.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (12) : 15643 - 15657
  • [45] Neural networks in Frechet spaces
    Benth, Fred Espen
    Detering, Nils
    Galimberti, Luca
    ANNALS OF MATHEMATICS AND ARTIFICIAL INTELLIGENCE, 2023, 91 (01) : 75 - 103
  • [46] Significance Tests for Neural Networks
    Horel, Enguerrand
    Giesecke, Kay
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21
  • [47] Classification by Sparse Neural Networks
    Kurkova, Vera
    Sanguineti, Marcello
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30 (09) : 2746 - 2754
  • [48] Learning and Convergence of the Normalized Radial Basis Functions Networks
    Krzyzak, Adam
    Partyka, Marian
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2018, PT I, 2018, 10841 : 118 - 129
  • [49] On Learning and Convergence of RBF Networks in Regression Estimation and Classification
    Krzyzak, Adam
    Partyka, Marian
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, PT I, 2019, 11508 : 131 - 142
  • [50] Convergence of RBF Networks Regression Function Estimates and Classifiers
    Krzyzak, Adam
    Galkowski, Tomasz
    Partyka, Marian
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2022, PT I, 2023, 13588 : 363 - 376