The Convergence of Incremental Neural Networks

Times Cited: 0
Authors
Chen, Lei [1 ]
Wang, Yilin [2 ]
Zhang, Lixiao [1 ]
Chen, Wei [1 ]
Affiliations
[1] Midea Grp, Kitchen & Water Heater Appliance, Foshan, Peoples R China
[2] Shenyang Aerosp Univ, Coll Aeronaut & Astronaut, Shenyang, Peoples R China
Keywords
Feedforward neural network; Convergence rate; Universal approximation; Random search; MULTILAYER FEEDFORWARD NETWORKS; EXTREME LEARNING-MACHINE; APPROXIMATION CAPABILITY; BOUNDS; RATES;
DOI
10.1007/s11063-023-11429-4
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The study of neural network convergence is central to understanding both the universal approximation capability and the structural complexity of these models. In this study, we investigate a generalized convex incremental iteration method that goes beyond previous work by admitting a broader range of weight parameters. We rigorously establish the convergence rate of this convex iteration technique, demonstrating its reliability and effectiveness. We further adopt a discrete statistical perspective to address two practical difficulties, the non-compactness of the input data and the fact that the objective function is unknown, thereby improving the robustness and applicability of the analysis. To support our conclusions, we present two implementation algorithms, back propagation and random search; the latter helps prevent the network from becoming trapped in poor local minima during training. Finally, we report results on a variety of regression problems that provide empirical evidence of the algorithms' performance and confirm their agreement with our theoretical predictions, contributing to a clearer understanding of neural network convergence and its implications for universal approximation.
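The abstract describes the convex incremental scheme only at a high level. The following is a minimal sketch, assuming the classical convex update f_n = (1 - beta_n) f_{n-1} + beta_n a_n g_n with a sigmoid hidden node g_n whose input weights are chosen by random search; the paper's generalized formulation admits a wider class of weight parameters, and all names below (fit_incremental, predict, n_candidates, beta) are illustrative rather than taken from the paper.

# Minimal sketch (not the paper's exact algorithm): greedy convex incremental
# training of a single-hidden-layer network, with random search for each new
# hidden node.  Assumes f_n = (1 - beta_n) f_{n-1} + beta_n * a_n * g_n.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_incremental(X, y, n_neurons=50, n_candidates=200):
    """Add one sigmoid neuron at a time via a convex combination with the
    current network; the new node's input weights come from random search."""
    n_samples, n_features = X.shape
    f = np.zeros(n_samples)                 # current network output f_{n-1}(x_i)
    neurons = []                            # (input weights, bias, output weight)
    for n in range(1, n_neurons + 1):
        beta = 1.0 / n                      # one simple convex step-size choice
        best_err, best = np.inf, None
        for _ in range(n_candidates):       # random search over (w, b)
            w = rng.normal(size=n_features)
            b = rng.normal()
            g = sigmoid(X @ w + b)
            # Least-squares output weight for this candidate within the convex update.
            a = (g @ (y - (1.0 - beta) * f)) / (beta * (g @ g))
            f_cand = (1.0 - beta) * f + beta * a * g
            err = np.mean((y - f_cand) ** 2)
            if err < best_err:
                best_err, best = err, (w, b, a, f_cand)
        w, b, a, f = best
        # The convex step shrinks every previously learned output weight by (1 - beta).
        neurons = [(wj, bj, (1.0 - beta) * aj) for wj, bj, aj in neurons]
        neurons.append((w, b, beta * a))
    return neurons

def predict(neurons, X):
    return sum(a * sigmoid(X @ w + b) for w, b, a in neurons)

# Toy regression check.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=200)
net = fit_incremental(X, y)
print("train MSE:", float(np.mean((predict(net, X) - y) ** 2)))

The step size beta_n = 1/n is only one possible choice; a line search over beta_n, or the back-propagation variant mentioned in the abstract, would fit the same incremental framework.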
Pages: 12481-12499 (19 pages)
Related Papers (50 total)
  • [31] Convergence rates of training deep neural networks via alternating minimization methods
    Xu, Jintao
    Bao, Chenglong
    Xing, Wenxun
    OPTIMIZATION LETTERS, 2024, 18 (04) : 909 - 923
  • [33] Convergence of gradient method with momentum for two-layer feedforward neural networks
    Zhang, NM
    Wu, W
    Zheng, GF
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (02): : 522 - 525
  • [34] Global exponential convergence of delayed inertial Cohen-Grossberg neural networks
    Wu, Yanqiu
    Dai, Nina
    Tu, Zhengwen
    Wang, Liangwei
    Tang, Qian
    NONLINEAR ANALYSIS-MODELLING AND CONTROL, 2023, 28 (06): : 1062 - 1076
  • [35] New study on neural networks: The essential order of approximation
    Wang, Jianjun
    Xu, Zongben
    NEURAL NETWORKS, 2010, 23 (05) : 618 - 624
  • [36] The errors of simultaneous approximation of multivariate functions by neural networks
    Xie, Tingfan
    Cao, Feilong
    COMPUTERS & MATHEMATICS WITH APPLICATIONS, 2011, 61 (10) : 3146 - 3152
  • [37] Full error analysis for the training of deep neural networks
    Beck, Christian
    Jentzen, Arnulf
    Kuckuck, Benno
    INFINITE DIMENSIONAL ANALYSIS QUANTUM PROBABILITY AND RELATED TOPICS, 2022, 25 (02)
  • [38] Mathematical Neural Networks
    Garcia Cabello, Julia
    AXIOMS, 2022, 11 (02)
  • [39] Convergence of online gradient method with a penalty term for feedforward neural networks with stochastic inputs
    Shao, Hongmei
    Wu, Wei
    Li, Feng
    NUMERICAL MATHEMATICS: A JOURNAL OF CHINESE UNIVERSITIES (ENGLISH SERIES), 2005, (01) : 87 - 96
  • [40] Global exponential stability for delayed cellular neural networks and estimate of exponential convergence rate
    Zhang, Qiang
    JOURNAL OF SYSTEMS ENGINEERING AND ELECTRONICS, 2004, (03) : 344 - 349