Boundedness and Convergence of Online Gradient Method with Penalty for Linear Output Feedforward Neural Networks

Cited by: 0
Authors
Huisheng Zhang
Wei Wu
Affiliations
[1] Dalian University of Technology, Department of Applied Mathematics
[2] Dalian Maritime University, Department of Mathematics
Source
Neural Processing Letters | 2009, Vol. 29
Keywords
Feedforward neural networks; Linear output; Online gradient method; Penalty; Boundedness; Convergence
DOI
Not available
Abstract
This paper investigates an online gradient method with penalty for training feedforward neural networks with linear output. The penalty considered is a standard one: a term proportional to the norm of the weights. The main contribution of this paper is a theoretical proof that the weights remain bounded throughout the network training process. This boundedness is then used to prove almost sure convergence of the algorithm to the zero set of the gradient of the error function.
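A minimal sketch of the kind of algorithm the abstract describes: an online (sample-by-sample) gradient method with a weight-norm penalty for a single-hidden-layer feedforward network with linear output. The network size, learning rate `eta`, penalty coefficient `lam`, and toy data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, h = 3, 5                              # input and hidden dimensions (assumed)
V = rng.normal(scale=0.5, size=(h, d))   # input-to-hidden weights
w = rng.normal(scale=0.5, size=h)        # hidden-to-output weights (linear output)
eta, lam = 0.05, 1e-3                    # step size and penalty coefficient (assumed)

def train_online(X, y, epochs=200):
    """Online training: update the weights after each sample.

    The per-sample objective is 0.5*err**2 + 0.5*lam*(||w||^2 + ||V||_F^2);
    the penalty term pulls the weights toward zero, which is what keeps
    them bounded during training.
    """
    global V, w
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            g = sigmoid(V @ x_i)         # hidden activations
            err = w @ g - y_i            # linear output minus target
            grad_w = err * g + lam * w
            grad_V = err * np.outer(w * g * (1.0 - g), x_i) + lam * V
            w -= eta * grad_w
            V -= eta * grad_V

X = rng.normal(size=(40, d))
y = X @ np.array([1.0, -2.0, 0.5])       # toy regression target
train_online(X, y)
mse = float(np.mean((sigmoid(X @ V.T) @ w - y) ** 2))
```

The penalty gradient `lam * w` (and `lam * V`) acts as a restoring force on the weight norm; the paper's contribution is proving that, with such a penalty, the weight sequence generated by these online updates stays bounded and converges almost surely to a stationary point of the error function.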
Pages: 205 - 212
Page count: 7
Related Papers (50 in total)
  • [31] Convergence of Batch Gradient Method Based on the Entropy Error Function for Feedforward Neural Networks
    Xiong, Yan
    Tong, Xin
    Neural Processing Letters, 2020, 52 (03): 2687 - 2695
  • [33] Convergence of Gradient Method for Double Parallel Feedforward Neural Network
    Wang, Jian
    Wu, Wei
    Li, Zhengxue
    Li, Long
    International Journal of Numerical Analysis and Modeling, 2011, 8 (03): 484 - 495
  • [34] Convergence of Gradient Method with Penalty for Ridge Polynomial Neural Network
    Yu, Xin
    Chen, Qingfeng
    Neurocomputing, 2012, 97: 405 - 409
  • [35] Relaxed Conditions for Convergence of Batch BPAP for Feedforward Neural Networks
    Shao, Hongmei
    Wang, Jian
    Liu, Lijun
    Xu, Dongpo
    Bao, Wendi
    Neurocomputing, 2015, 153: 174 - 179
  • [36] Convergence of Gradient Descent for Learning Linear Neural Networks
    Nguegnang, Gabin Maxime
    Rauhut, Holger
    Terstiege, Ulrich
    Advances in Continuous and Discrete Models, 2024, 2024 (01)
  • [37] Boundedness and Convergence Analysis of Weight Elimination for Cyclic Training of Neural Networks
    Wang, Jian
    Ye, Zhenyun
    Gao, Weifeng
    Zurada, Jacek M.
    Neural Networks, 2016, 82: 49 - 61
  • [38] Batch Gradient Method with Smoothing L1/2 Regularization for Training of Feedforward Neural Networks
    Wu, Wei
    Fan, Qinwei
    Zurada, Jacek M.
    Wang, Jian
    Yang, Dakun
    Liu, Yan
    Neural Networks, 2014, 50: 72 - 78
  • [40] Convergence of Cyclic and Almost-Cyclic Learning with Momentum for Feedforward Neural Networks
    Wang, Jian
    Yang, Jie
    Wu, Wei
    IEEE Transactions on Neural Networks, 2011, 22 (08): 1297 - 1306