Deterministic convergence of conjugate gradient method for feedforward neural networks

Cited: 33
|
Authors
Wang, Jian [1 ,2 ,3 ]
Wu, Wei [2 ]
Zurada, Jacek M. [1 ]
Affiliations
[1] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] China Univ Petr, Sch Math & Computat Sci, Dongying 257061, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deterministic convergence; Conjugate gradient; Backpropagation; Feedforward neural networks; EXTREME LEARNING-MACHINE; ONLINE; ALGORITHM;
DOI
10.1016/j.neucom.2011.03.016
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conjugate gradient methods offer practical advantages in numerical experiments, such as fast convergence and low memory requirements. This paper considers a class of conjugate gradient learning methods for backpropagation neural networks with three layers. We propose a new learning algorithm for almost cyclic learning of neural networks based on the PRP (Polak-Ribiere-Polyak) conjugate gradient method. We then establish deterministic convergence properties for three learning modes: batch, cyclic, and almost cyclic learning. Two types of deterministic convergence are proved: weak convergence, meaning that the gradient of the error function tends to zero, and strong convergence, meaning that the weight sequence converges to a fixed point. It is shown that the convergence results depend on the learning mode and on the strategy used to select the learning rate. Illustrative numerical examples are given to support the theoretical analysis. (C) 2011 Elsevier B.V. All rights reserved.
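To illustrate the kind of method the abstract describes, the following is a minimal NumPy sketch of batch-mode training of a three-layer (one hidden layer) sigmoid network using the PRP conjugate gradient direction d_k = -g_k + beta_k d_{k-1}, with beta_k = g_k.(g_k - g_{k-1}) / ||g_{k-1}||^2. This is an illustrative reconstruction, not the paper's algorithm: the PRP+ nonnegativity safeguard, the Armijo backtracking line search, and the toy XOR task are assumptions chosen to make the sketch self-contained and stable, whereas the paper analyzes specific learning-rate selection strategies and also cyclic/almost-cyclic modes.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for large negative activations.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def error_and_grad(w, X, y, n_hid):
    # Batch squared error E(w) = 1/2 * sum_j (o_j - y_j)^2 and its
    # gradient via standard backpropagation for a 1-hidden-layer net.
    n, n_in = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])          # inputs with bias column
    V = w[: n_hid * (n_in + 1)].reshape(n_hid, n_in + 1)  # hidden weights
    u = w[n_hid * (n_in + 1):]                    # output weights (+ bias)
    H = sigmoid(Xb @ V.T)                         # hidden activations
    Hb = np.hstack([H, np.ones((n, 1))])
    o = sigmoid(Hb @ u)                           # network outputs
    e = o - y
    E = 0.5 * np.sum(e ** 2)
    delta_o = e * o * (1 - o)                     # output-layer delta
    g_u = Hb.T @ delta_o
    delta_h = np.outer(delta_o, u[:-1]) * H * (1 - H)
    g_V = delta_h.T @ Xb
    return E, np.concatenate([g_V.ravel(), g_u])

def train_prp(X, y, n_hid=3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    w = rng.normal(scale=0.5, size=n_hid * (n_in + 1) + n_hid + 1)
    E0, g = error_and_grad(w, X, y, n_hid)
    E, d = E0, -g                                 # first direction: steepest descent
    for _ in range(iters):
        if g @ d >= 0:                            # safeguard: restart if not a descent direction
            d = -g
        gd, eta = g @ d, 1.0
        while True:                               # Armijo backtracking line search (assumed)
            E_new, g_new = error_and_grad(w + eta * d, X, y, n_hid)
            if E_new <= E + 1e-4 * eta * gd or eta < 1e-10:
                break
            eta *= 0.5
        # PRP coefficient with the PRP+ nonnegativity truncation.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
        w, E = w + eta * d, E_new
        d = -g_new + beta * d
        g = g_new
    return E0, E, w

# Toy task: XOR, a standard test problem for small feedforward networks.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
E0, E_final, w = train_prp(X, y)
```

The restart safeguard and the PRP+ truncation are common practical devices for nonconvex objectives; the paper's convergence theorems instead hinge on the learning mode (batch, cyclic, almost cyclic) and the learning-rate selection strategy.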
Pages: 2368-2376
Number of pages: 9
Related Papers
50 records
  • [21] Stabilization and speedup of convergence in training feedforward neural networks
    Looney, CG
    NEUROCOMPUTING, 1996, 10 (01) : 7 - 31
  • [22] GLOBAL CONVERGENCE OF A MODIFIED CONJUGATE GRADIENT METHOD
    Li, Can
    Fang, Ling
    Lu, Peng
    2012 INTERNATIONAL CONFERENCE ON WAVELET ACTIVE MEDIA TECHNOLOGY AND INFORMATION PROCESSING (LCWAMTIP), 2012, : 78 - 81
  • [23] Feedforward Neural Networks with a Hidden Layer Regularization Method
    Alemu, Habtamu Zegeye
    Wu, Wei
    Zhao, Junhong
    SYMMETRY-BASEL, 2018, 10 (10):
  • [24] An Efficient Elman Neural Networks Based on Improved Conjugate Gradient Method with Generalized Armijo Search
    Zhu, Mingyue
    Gao, Tao
    Zhang, Bingjie
    Sun, Qingying
    Wang, Jian
    INTELLIGENT COMPUTING THEORIES AND APPLICATION, PT I, 2018, 10954 : 1 - 7
  • [25] Online gradient method with smoothing l0 regularization for feedforward neural networks
    Zhang, Huisheng
    Tang, Yanli
    NEUROCOMPUTING, 2017, 224 : 1 - 8
  • [26] An Improved Conjugate Gradient Neural Networks Based on a Generalized Armijo Search Method
    Zhang, Bingjie
    Gao, Tao
    Li, Long
    Sun, Zhanquan
    Wang, Jian
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT IV, 2017, 10637 : 131 - 139
  • [27] Enhanced Conjugate Gradient Method for Unconstrained Optimization and Its Application in Neural Networks
    Omar, Dlovan Haji
    Ibrahim, Alaa Luqman
    Hassan, Masoud Muhammed
    Fathi, Bayda Ghanim
    Sulaiman, Diman Abdulqader
    EUROPEAN JOURNAL OF PURE AND APPLIED MATHEMATICS, 2024, 17 (04): : 2692 - 2705
  • [28] The convergence of conjugate gradient method with nonmonotone line search
    Shi, Zhen-Jun
    Wang, Shengquan
    Xu, Zhiwei
    APPLIED MATHEMATICS AND COMPUTATION, 2010, 217 (05) : 1921 - 1932
  • [29] The deflated conjugate gradient method: Convergence, perturbation and accuracy
    Kahl, K.
    Rittich, H.
    LINEAR ALGEBRA AND ITS APPLICATIONS, 2017, 515 : 111 - 129
  • [30] A Modified Spectral Conjugate Gradient Method with Global Convergence
    Faramarzi, Parvaneh
    Amini, Keyvan
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2019, 182 (02) : 667 - 690