Convergence of an online gradient method for feedforward neural networks with stochastic inputs

Cited by: 24
Authors
Li, ZX
Wu, W [1 ]
Tian, YL
Affiliations
[1] Dalian Univ Technol, Dept Appl Math, Dalian 116023, Peoples R China
[2] Huazhong Univ Sci & Technol, Wuhan 430000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
feedforward neural networks; online gradient method; convergence; stochastic inputs;
DOI
10.1016/j.cam.2003.08.062
Chinese Library Classification (CLC)
O29 [Applied Mathematics];
Subject classification code
070104;
Abstract
In this paper, we study the convergence of an online gradient method for feedforward neural networks. The input training examples are permuted stochastically in each cycle of iteration. A monotonicity result and a weak convergence result of a deterministic nature are proved. (C) 2003 Elsevier B.V. All rights reserved.
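The training scheme the abstract describes can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (the network size, learning rate, data, and update rule are all illustrative, not the paper's exact setting): a single-hidden-layer feedforward network trained by the online gradient method, where the training examples are stochastically permuted in each cycle and the weights are updated after every individual example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set (illustrative): 20 samples, 3 features, scalar targets.
X = rng.normal(size=(20, 3))
y = rng.normal(size=20)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small single-hidden-layer network (sizes are assumptions, not the paper's).
W = rng.normal(scale=0.1, size=(3, 5))   # input-to-hidden weights
v = rng.normal(scale=0.1, size=5)        # hidden-to-output weights
eta = 0.05                               # learning rate

def mse(W, v):
    """Mean squared training error of the current network."""
    return np.mean((sigmoid(X @ W) @ v - y) ** 2)

loss_before = mse(W, v)
for cycle in range(100):
    # Online gradient method: permute the training examples stochastically
    # in each cycle, then update the weights after every single example.
    for i in rng.permutation(len(X)):
        h = sigmoid(X[i] @ W)                 # hidden activations
        err = h @ v - y[i]                    # output error for this example
        grad_v = err * h                      # gradient w.r.t. v
        grad_W = err * np.outer(X[i], v * h * (1.0 - h))  # gradient w.r.t. W
        v -= eta * grad_v
        W -= eta * grad_W
loss_after = mse(W, v)
```

Because each cycle visits every example exactly once in a fresh random order, this sits between pure stochastic gradient descent (independent sampling) and cyclic training with a fixed order, which is precisely the stochastic-input setting the paper analyzes.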
Pages: 165 - 176
Page count: 12
Related papers
50 records
  • [21] CONVERGENCE OF GRADIENT METHOD FOR DOUBLE PARALLEL FEEDFORWARD NEURAL NETWORK
    Wang, Jian
    Wu, Wei
    Li, Zhengxue
    Li, Long
    INTERNATIONAL JOURNAL OF NUMERICAL ANALYSIS AND MODELING, 2011, 8 (03) : 484 - 495
  • [22] Convergence Analysis of Online Gradient Method for High-Order Neural Networks and Their Sparse Optimization
    Fan, Qinwei
    Kang, Qian
    Zurada, Jacek M.
    Huang, Tingwen
    Xu, Dongpo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 18687 - 18701
  • [23] Convergence of Cyclic and Almost-Cyclic Learning with Momentum for Feedforward Neural Networks
    Wang, Jian
    Yang, Jie
    Wu, Wei
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2011, 22 (08) : 1297 - 1306
  • [24] Convergence analyses on sparse feedforward neural networks via group lasso regularization
    Wang, Jian
    Cai, Qingling
    Chang, Qingquan
    Zurada, Jacek M.
    INFORMATION SCIENCES, 2017, 381 : 250 - 269
  • [25] Learning in neural networks by normalized stochastic gradient algorithm: Local convergence
    Tadic, V
    Stankovic, S
    NEUREL 2000: PROCEEDINGS OF THE 5TH SEMINAR ON NEURAL NETWORK APPLICATIONS IN ELECTRICAL ENGINEERING, 2000, : 11 - 17
  • [26] Convergence Analysis of Interval Feedforward Neural Networks
    Guan, Shouping
    Liang, Yue
    Yu, Xiaoyu
    2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022, : 3797 - 3801
  • [27] Batch gradient training method with smoothing l0 regularization for feedforward neural networks
    Zhang, Huisheng
    Tang, Yanli
    Liu, Xiaodong
    NEURAL COMPUTING & APPLICATIONS, 2015, 26 (02) : 383 - 390
  • [28] Boundedness and convergence of batch back-propagation algorithm with penalty for feedforward neural networks
    Zhang, Huisheng
    Wu, Wei
    Yao, Mingchen
    NEUROCOMPUTING, 2012, 89 : 141 - 146
  • [29] Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks
    Wu, Wei
    Fan, Qinwei
    Zurada, Jacek M.
    Wang, Jian
    Yang, Dakun
    Liu, Yan
    NEURAL NETWORKS, 2014, 50 : 72 - 78
  • [30] Stabilization and speedup of convergence in training feedforward neural networks
    Looney, CG
    NEUROCOMPUTING, 1996, 10 (01) : 7 - 31