A New Correntropy-Based Conjugate Gradient Backpropagation Algorithm for Improving Training in Neural Networks

Cited by: 68
Authors
Heravi, Ahmad Reza [1 ]
Hodtani, Ghosheh Abed [1 ]
Affiliation
[1] Ferdowsi Univ Mashhad, Dept Elect Engn, Mashhad 9177948974, Iran
Keywords
Artificial neural networks; conjugate gradient (CG) descent; convergence; correntropy; mean square error (MSE) methods; optimization methods; global convergence; minimization; criterion; descent
DOI
10.1109/TNNLS.2018.2827778
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Mean square error (MSE) is the most prominent criterion for training neural networks and has been employed in numerous learning problems. In this paper, we propose a group of novel robust information-theoretic backpropagation (BP) methods, termed correntropy-based conjugate gradient BP (CCG-BP). CCG-BP algorithms converge faster than common correntropy-based BP algorithms and outperform common MSE-based CG-BP algorithms, especially in non-Gaussian environments and in the presence of impulsive noise or heavy-tailed noise distributions. In addition, a convergence analysis of this new class of methods is provided. Numerical results for several function approximation, synthetic function estimation, and chaotic time series prediction tasks illustrate that the new BP methods are more robust to impulsive noise than MSE-based methods, especially at low SNR.
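The abstract's key idea, replacing the MSE cost inside conjugate gradient BP with a correntropy criterion, can be illustrated with a short sketch. The NumPy example below is a minimal illustration under assumed details the record does not specify: a Gaussian kernel of width sigma, a linear model instead of a multilayer network, a Fletcher-Reeves direction update, and a simple backtracking line search. Function names such as train_cg are hypothetical, and the paper's actual CCG-BP algorithms differ; the sketch only shows why kernel-weighted error gradients suppress impulsive outliers.

import numpy as np

def correntropy_loss(err, sigma=1.0):
    # Negative mean correntropy of the error vector (minimizing this
    # maximizes correntropy between targets and predictions).
    return -np.mean(np.exp(-err**2 / (2.0 * sigma**2)))

def correntropy_grad(err, sigma=1.0):
    # d(loss)/d(err): each error is re-weighted by its Gaussian kernel value,
    # so large (impulsive) errors contribute almost nothing to the gradient.
    k = np.exp(-err**2 / (2.0 * sigma**2))
    return k * err / (sigma**2 * err.size)

def train_cg(X, y, sigma=1.0, iters=200):
    # Conjugate gradient descent on the correntropy loss of a linear model.
    w = np.zeros(X.shape[1])
    g = X.T @ correntropy_grad(X @ w - y, sigma)
    d = -g
    for _ in range(iters):
        # Backtracking line search along the conjugate direction d.
        step, loss0 = 1.0, correntropy_loss(X @ w - y, sigma)
        while step > 1e-12 and correntropy_loss(X @ (w + step * d) - y, sigma) > loss0:
            step *= 0.5
        w = w + step * d
        g_new = X.T @ correntropy_grad(X @ w - y, sigma)
        beta = (g_new @ g_new) / (g @ g + 1e-12)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=200)
mask = rng.random(200) < 0.05          # 5% impulsive outliers
y[mask] += rng.normal(scale=20.0, size=mask.sum())
print(train_cg(X, y))                  # close to w_true despite the outliers

Because each residual is multiplied by exp(-e^2 / 2*sigma^2), an outlier with |e| much larger than sigma is effectively ignored by the gradient, which is the mechanism behind the robustness to impulsive noise described in the abstract; under a pure MSE loss the same outliers would dominate the update.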
Pages: 6252-6263
Number of pages: 12
Related Papers
50 records in total
  • [1] A new conjugate gradient algorithm for training neural networks based on a modified secant equation
    Livieris, Ioannis E.
    Pintelas, Panagiotis
    APPLIED MATHEMATICS AND COMPUTATION, 2013, 221: 491-502
  • [2] Overfitting and neural networks: Conjugate gradient and backpropagation
    Lawrence, S
    Giles, CL
    IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL I, 2000: 114-119
  • [3] Fast Correntropy-Based Clustering Algorithm
    Li Z.
    Yang B.
    Zhang J.
    Liu Y.
    Zhang X.
    Wang F.
    Hsi-An Chiao Tung Ta Hsueh/Journal of Xi'an Jiaotong University, 2021, 55(06): 121-130
  • [4] Adaptive Stochastic Conjugate Gradient Optimization for Backpropagation Neural Networks
    Hashem, Ibrahim Abaker Targio
    Alaba, Fadele Ayotunde
    Jumare, Muhammad Haruna
    Ibrahim, Ashraf Osman
    Abulfaraj, Anas Waleed
    IEEE ACCESS, 2024, 12: 33757-33768
  • [5] A Modified Gradient-based Backpropagation Training Method for Neural Networks
    Mu, Xuewen
    Zhang, Yaling
    2009 IEEE INTERNATIONAL CONFERENCE ON GRANULAR COMPUTING (GRC 2009), 2009: 450+
  • [6] AN ADAPTIVE TRAINING ALGORITHM FOR BACKPROPAGATION NEURAL NETWORKS
    HSIN, HC
    LI, CC
    SUN, MG
    SCLABASSI, RJ
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS, 1995, 25(03): 512-514
  • [7] A stochastic backpropagation algorithm for training neural networks
    Chen, YQ
    Yin, T
    Babri, HA
    ICICS - PROCEEDINGS OF 1997 INTERNATIONAL CONFERENCE ON INFORMATION, COMMUNICATIONS AND SIGNAL PROCESSING, VOLS 1-3: THEME: TRENDS IN INFORMATION SYSTEMS ENGINEERING AND WIRELESS MULTIMEDIA COMMUNICATIONS, 1997: 703-707
  • [8] A New Two-Step Gradient-Based Backpropagation Training Method for Neural Networks
    Mu, Xuewen
    Zhang, Yaling
    ADVANCES IN NEURAL NETWORKS - ISNN 2010, PT 1, PROCEEDINGS, 2010, 6063: 95+
  • [9] IMPROVEMENT OF THE BACKPROPAGATION ALGORITHM FOR TRAINING NEURAL NETWORKS
    LEONARD, J
    KRAMER, MA
    COMPUTERS & CHEMICAL ENGINEERING, 1990, 14(03): 337-341
  • [10] Adaptive nonmonotone conjugate gradient training algorithm for recurrent neural networks
    Peng, Chun-Cheng
    Magoulas, George D.
    19TH IEEE INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, VOL II, PROCEEDINGS, 2007: 374-381