Boundedness and Convergence of Online Gradient Method with Penalty for Linear Output Feedforward Neural Networks

Cited by: 0
Authors
Huisheng Zhang
Wei Wu
Affiliations
[1] Dalian University of Technology, Department of Applied Mathematics
[2] Dalian Maritime University, Department of Mathematics
Source
Neural Processing Letters | 2009 / Vol. 29
Keywords
Feedforward neural networks; Linear output; Online gradient method; Penalty; Boundedness; Convergence;
DOI
Not available
Abstract
This paper investigates an online gradient method with a penalty term for training feedforward neural networks with linear output. The penalty considered is the usual one: a term proportional to the norm of the weights. The main contribution of this paper is a theoretical proof that the weights remain bounded throughout the training process. This boundedness is then used to prove almost sure convergence of the algorithm to the zero set of the gradient of the error function.
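The training scheme described in the abstract can be sketched in a few lines: sample-by-sample (online) gradient descent on a two-layer network with a linear output layer, where the error function is augmented by a penalty proportional to the norm of the weights. The network shape, activation, penalty coefficient, and step size below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's exact setup): online gradient
# descent with an L2 weight penalty for a two-layer feedforward network
# whose output layer is linear.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: target is the sum of the inputs plus small noise.
X = rng.uniform(-1, 1, size=(200, 3))
y = X.sum(axis=1) + 0.01 * rng.standard_normal(200)

n_hidden = 5
V = rng.standard_normal((n_hidden, 3)) * 0.5   # hidden-layer weights
w = rng.standard_normal(n_hidden) * 0.5        # linear output weights
eta, lam = 0.05, 1e-4                          # step size, penalty coefficient

for epoch in range(50):
    for i in rng.permutation(len(X)):          # online: update after each sample
        h = sigmoid(V @ X[i])                  # hidden-layer output
        e = h @ w - y[i]                       # linear output error
        # Gradients of the penalized instantaneous error
        # 0.5*e^2 + lam*(||w||^2 + ||V||_F^2):
        grad_w = e * h + 2 * lam * w
        grad_V = np.outer(e * w * h * (1 - h), X[i]) + 2 * lam * V
        w -= eta * grad_w
        V -= eta * grad_V

mse = np.mean((sigmoid(X @ V.T) @ w - y) ** 2)
```

The penalty term `2 * lam * w` (and its analogue for `V`) is what keeps the weight sequence bounded during training; the paper's result is that this boundedness then yields almost sure convergence to the zero set of the gradient.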
Pages: 205-212
Number of pages: 8
Related Papers
50 records
  • [21] Convergence of an online gradient method with inner-product penalty and adaptive momentum
    Shao, Hongmei
    Xu, Dongpo
    Zheng, Gaofeng
    Liu, Lijun
    NEUROCOMPUTING, 2012, 77 (01) : 243 - 252
  • [22] Deterministic convergence of an online gradient method for BP neural networks
    Wu, W
    Feng, GR
    Li, ZX
    Xu, YS
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2005, 16 (03): 533 - 540
  • [23] Convergence analysis of online gradient method for BP neural networks
    Wu, Wei
    Wang, Jian
    Cheng, Mingsong
    Li, Zhengxue
    NEURAL NETWORKS, 2011, 24 (01) : 91 - 98
  • [24] Convergence of gradient method with momentum for two-layer feedforward neural networks
    Zhang, NM
    Wu, W
    Zheng, GF
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (02): 522 - 525
  • [25] Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks
    Huisheng Zhang
    Ying Zhang
    Dongpo Xu
    Xiaodong Liu
    Cognitive Neurodynamics, 2015, 9 : 331 - 340
  • [26] Convergence Analysis of Online Gradient Method for High-Order Neural Networks and Their Sparse Optimization
    Fan, Qinwei
    Kang, Qian
    Zurada, Jacek M.
    Huang, Tingwen
    Xu, Dongpo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023: 1 - 15
  • [27] Convergence Analysis of Online Gradient Method for High-Order Neural Networks and Their Sparse Optimization
    Fan, Qinwei
    Kang, Qian
    Zurada, Jacek M.
    Huang, Tingwen
    Xu, Dongpo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 18687 - 18701
  • [28] Convergence analyses on sparse feedforward neural networks via group lasso regularization
    Wang, Jian
    Cai, Qingling
    Chang, Qingquan
    Zurada, Jacek M.
    INFORMATION SCIENCES, 2017, 381 : 250 - 269
  • [29] Convergence of gradient descent algorithm with penalty term for recurrent neural networks
    Ding, Xiaoshuai
    Wang, Kuaini
    INTERNATIONAL JOURNAL OF MULTIMEDIA AND UBIQUITOUS ENGINEERING, 2014, 9 (09): 151 - 158
  • [30] Batch gradient training method with smoothing regularization for l0 feedforward neural networks
    Zhang, Huisheng
    Tang, Yanli
    Liu, Xiaodong
    NEURAL COMPUTING & APPLICATIONS, 2015, 26 (02) : 383 - 390