Deterministic convergence of conjugate gradient method for feedforward neural networks

Cited by: 33
Authors
Wang, Jian [1 ,2 ,3 ]
Wu, Wei [2 ]
Zurada, Jacek M. [1 ]
Affiliations
[1] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] China Univ Petr, Sch Math & Computat Sci, Dongying 257061, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deterministic convergence; Conjugate gradient; Backpropagation; Feedforward neural networks; EXTREME LEARNING-MACHINE; ONLINE; ALGORITHM;
DOI
10.1016/j.neucom.2011.03.016
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Conjugate gradient methods offer practical advantages in numerical experiments, such as fast convergence and low memory requirements. This paper considers a class of conjugate gradient learning methods for backpropagation neural networks with three layers. We propose a new learning algorithm for almost cyclic learning of neural networks based on the Polak-Ribière-Polyak (PRP) conjugate gradient method. We then establish deterministic convergence properties for three learning modes: batch, cyclic, and almost cyclic learning. Two kinds of deterministic convergence are proved: weak convergence, meaning that the gradient of the error function tends to zero, and strong convergence, meaning that the weight sequence converges to a fixed point. The convergence results differ across the learning modes and depend on the strategy used to select the learning rate. Illustrative numerical examples are given to support the theoretical analysis. (C) 2011 Elsevier B.V. All rights reserved.
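The abstract does not spell out the update rule, so the following is only a point of reference: a minimal batch-mode sketch of a standard PRP conjugate gradient weight update for a one-hidden-layer network, not the paper's implementation. The toy data, network size, diminishing learning rate, and the non-negativity clipping of beta are illustrative assumptions.

# Minimal sketch (assumptions noted above): batch-mode PRP conjugate gradient
# training of a one-hidden-layer tanh network on a toy regression task.
# Only the direction update follows the standard PRP formula
#   beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2,   d_k = -g_k + beta_k d_{k-1}.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 2))      # toy inputs
Y = np.sin(X[:, :1]) + 0.5 * X[:, 1:]          # toy targets
w = rng.normal(0.0, 0.5, size=2 * 8 + 8 * 1)   # all weights in one flat vector

def unpack(w):
    # split the flat vector into input-to-hidden and hidden-to-output matrices
    return w[:16].reshape(2, 8), w[16:].reshape(8, 1)

def loss_and_grad(w):
    W1, W2 = unpack(w)
    H = np.tanh(X @ W1)                        # hidden activations
    err = H @ W2 - Y
    loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
    d_out = err / X.shape[0]                   # dLoss/d(output)
    gW2 = H.T @ d_out
    d_hid = (d_out @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
    gW1 = X.T @ d_hid
    return loss, np.concatenate([gW1.ravel(), gW2.ravel()])

loss, g = loss_and_grad(w)
d = -g                                          # first direction: steepest descent
for k in range(200):
    eta = 0.1 / (1.0 + 0.01 * k)                # illustrative diminishing learning rate
    w = w + eta * d
    loss, g_new = loss_and_grad(w)
    # PRP beta, clipped at zero (PRP+) as a simple safeguard against ascent directions
    beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
    d = -g_new + beta * d
    g = g_new
print(f"final batch MSE after PRP-CG training: {loss:.4f}")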
Pages: 2368-2376
Number of pages: 9
Related Papers
50 records in total (items 41-50 shown)
  • [41] Parameter Conjugate Gradient with Secant Equation Based Elman Neural Network and its Convergence Analysis
    Fan, Qinwei
    Zhang, Zhiwen
    Huang, Xiaodi
    ADVANCED THEORY AND SIMULATIONS, 2022, 5 (09)
  • [42] Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks
    Wu, Wei
    Fan, Qinwei
    Zurada, Jacek M.
    Wang, Jian
    Yang, Dakun
    Liu, Yan
    NEURAL NETWORKS, 2014, 50 : 72 - 78
  • [43] A New Formulation for Feedforward Neural Networks
    Razavi, Saman
    Tolson, Bryan A.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2011, 22 (10) : 1588 - 1598
  • [44] Adaptive Stochastic Conjugate Gradient Optimization for Backpropagation Neural Networks
    Hashem, Ibrahim Abaker Targio
    Alaba, Fadele Ayotunde
    Jumare, Muhammad Haruna
    Ibrahim, Ashraf Osman
    Abulfaraj, Anas Waleed
    IEEE ACCESS, 2024, 12 : 33757 - 33768
  • [45] Modification of Learning Feedforward Neural Networks with the BP Method
    Bilski, Jaroslaw
    Smolag, Jacek
    Najgebauer, Patryk
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING (ICAISC 2021), PT I, 2021, 12854 : 54 - 65
  • [46] A recalling-enhanced recurrent neural network: Conjugate gradient learning algorithm and its convergence analysis
    Gao, Tao
    Gong, Xiaoling
    Zhang, Kai
    Lin, Feng
    Wang, Jian
    Huang, Tingwen
    Zurada, Jacek M.
    INFORMATION SCIENCES, 2020, 519 : 273 - 288
  • [47] A modified scaled conjugate gradient method with global convergence for nonconvex functions
    Babaie-Kafaki, Saman
    Ghanbari, Reza
    BULLETIN OF THE BELGIAN MATHEMATICAL SOCIETY-SIMON STEVIN, 2014, 21 (03) : 465 - 477
  • [48] Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks
    Fan, Qinwei
    Wu, Wei
    Zurada, Jacek M.
    SPRINGERPLUS, 2016, 5
  • [49] Convergence analyses on sparse feedforward neural networks via group lasso regularization
    Wang, Jian
    Cai, Qingling
    Chang, Qingquan
    Zurada, Jacek M.
    INFORMATION SCIENCES, 2017, 381 : 250 - 269
  • [50] A new method in determining initial weights of feedforward neural networks for training enhancement
    Yam, YF
    Chow, TWS
    Leung, CT
    NEUROCOMPUTING, 1997, 16 (01) : 23 - 32