Deterministic convergence of conjugate gradient method for feedforward neural networks

Cited: 33
Authors
Wang, Jian [1 ,2 ,3 ]
Wu, Wei [2 ]
Zurada, Jacek M. [1 ]
Affiliations
[1] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
[2] Dalian Univ Technol, Sch Math Sci, Dalian 116024, Peoples R China
[3] China Univ Petr, Sch Math & Computat Sci, Dongying 257061, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deterministic convergence; Conjugate gradient; Backpropagation; Feedforward neural networks; EXTREME LEARNING-MACHINE; ONLINE; ALGORITHM;
DOI
10.1016/j.neucom.2011.03.016
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conjugate gradient methods offer practical advantages in numerical experiments, such as fast convergence and low memory requirements. This paper considers a class of conjugate gradient learning methods for backpropagation neural networks with three layers. We propose a new learning algorithm for almost cyclic learning of neural networks based on the Polak-Ribiere-Polyak (PRP) conjugate gradient method. We then establish deterministic convergence properties for three learning modes: batch, cyclic, and almost cyclic learning. Two types of deterministic convergence are established: weak convergence, meaning that the gradient of the error function tends to zero, and strong convergence, meaning that the weight sequence converges to a fixed point. The convergence results depend on the learning mode and on the strategy used to select the learning rate. Illustrative numerical examples are given to support the theoretical analysis. (C) 2011 Elsevier B.V. All rights reserved.
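The batch-mode PRP conjugate gradient training scheme described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact algorithm: the 2-3-1 network size, the toy regression data, the Armijo backtracking step-size rule, and the PRP+ nonnegativity restart are all illustrative assumptions.

```python
# Sketch: batch training of a three-layer (2-3-1) feedforward network with
# PRP conjugate gradient directions. All sizes/data/step rules are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))                      # 20 samples, 2 inputs
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)  # toy regression target

def pack(W1, W2):
    return np.concatenate([W1.ravel(), W2.ravel()])

def unpack(w):
    return w[:6].reshape(2, 3), w[6:].reshape(3, 1)

def error_and_grad(w):
    """Batch squared error E(w) and its gradient via backpropagation."""
    W1, W2 = unpack(w)
    H = np.tanh(X @ W1)                # hidden-layer activations
    r = H @ W2 - y                     # residual of the linear output layer
    E = 0.5 * np.mean(r ** 2)
    gW2 = H.T @ r / len(X)
    gH = (r @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ gH / len(X)
    return E, pack(gW1, gW2)

def armijo(w, d, E, g, eta0=1.0, c=1e-4):
    """Backtracking line search: one simple learning-rate selection strategy."""
    eta, gd = eta0, g @ d
    while error_and_grad(w + eta * d)[0] > E + c * eta * gd and eta > 1e-8:
        eta *= 0.5
    return eta

w = pack(rng.standard_normal((2, 3)) * 0.5, rng.standard_normal((3, 1)) * 0.5)
E, g = error_and_grad(w)
E_init, d = E, -g                      # first direction: steepest descent
for k in range(100):
    if g @ d >= 0:                     # restart if d is not a descent direction
        d = -g
    eta = armijo(w, d, E, g)
    w = w + eta * d
    E_new, g_new = error_and_grad(w)
    # PRP+ formula: beta = max(0, g_new . (g_new - g) / ||g||^2)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))
    d = -g_new + beta * d
    E, g = E_new, g_new
print(f"batch error: {E_init:.4f} -> {E:.4f}")
```

The PRP+ truncation and descent-direction restart are standard safeguards; the paper's analysis instead ties convergence to the learning mode (batch, cyclic, almost cyclic) and the learning-rate selection strategy.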
Pages: 2368-2376 (9 pages)
Related papers
50 records in total
  • [31] Batch gradient training method with smoothing l(0) regularization for feedforward neural networks
    Zhang, Huisheng
    Tang, Yanli
    Liu, Xiaodong
    NEURAL COMPUTING & APPLICATIONS, 2015, 26 (02) : 383 - 390
  • [32] Global convergence property with inexact line search for a new conjugate gradient method
    Ben Hanachi, Sabrina
    Sellami, Badreddine
    Belloufi, Mohammed
    INTERNATIONAL JOURNAL OF OPTIMIZATION AND CONTROL-THEORIES & APPLICATIONS-IJOCTA, 2025, 15 (01): : 25 - 34
  • [33] A fast learning method for feedforward neural networks
    Wang, Shitong
    Chung, Fu-Lai
    Wang, Jun
    Wu, Jun
    NEUROCOMPUTING, 2015, 149 : 295 - 307
  • [34] Boundedness and convergence of batch back-propagation algorithm with penalty for feedforward neural networks
    Zhang, Huisheng
    Wu, Wei
    Yao, Mingchen
    NEUROCOMPUTING, 2012, 89 : 141 - 146
  • [35] On the convergence of s-dependent GFR conjugate gradient method for unconstrained optimization
    Zhao, Wenling
    Wang, Changyu
    Gu, Yajing
    NUMERICAL ALGORITHMS, 2018, 78 (03) : 721 - 738
  • [36] A conjugate gradient learning algorithm for recurrent neural networks
    Chang, WF
    Mak, MW
    NEUROCOMPUTING, 1999, 24 (1-3) : 173 - 189
  • [37] A new spectral conjugate gradient method for unconstrained optimization and its application in neural networks
    Abdulrahman, Asmaa M.
    Fathi, Bayda G.
    Najm, Huda Y.
    JOURNAL OF MATHEMATICS AND COMPUTER SCIENCE-JMCS, 2025, 36 (03): : 326 - 332
  • [38] Signature Recognition Using Conjugate Gradient Neural Networks
    Fathi, Jamal
    Hasna, Abu
    PROCEEDINGS OF WORLD ACADEMY OF SCIENCE, ENGINEERING AND TECHNOLOGY, VOL 14, 2006, 14 : 173 - 177
  • [39] Analysis on an Improved Global Convergence for a Spectral Conjugate Gradient Method
    Deng, Songhai
    Chen, Xiaohong
    Wan, Zhong
    INTERNATIONAL JOURNAL OF APPLIED MATHEMATICS & STATISTICS, 2013, 31 (01): : 20 - 26
  • [40] Some estimates of the rate of convergence for the cascadic conjugate gradient method
    Shaidurov, VV
    COMPUTERS & MATHEMATICS WITH APPLICATIONS, 1996, 31 (4-5) : 161 - 171