Convergence of an online gradient method for feedforward neural networks with stochastic inputs

Cited by: 24
Authors
Li, ZX
Wu, W [1 ]
Tian, YL
Affiliations
[1] Dalian Univ Technol, Dept Appl Math, Dalian 116023, Peoples R China
[2] Huazhong Univ Sci & Technol, Wuhan 430000, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
feedforward neural networks; online gradient method; convergence; stochastic inputs;
DOI
10.1016/j.cam.2003.08.062
Chinese Library Classification (CLC)
O29 [Applied Mathematics];
Subject classification code
070104;
Abstract
In this paper, we study the convergence of an online gradient method for feedforward neural networks. The input training examples are permuted stochastically in each cycle of the iteration. A monotonicity result and a weak convergence result of a deterministic nature are proved. (C) 2003 Elsevier B.V. All rights reserved.
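The training procedure described in the abstract is easy to state concretely. Below is a minimal Python sketch of an online gradient method in that setting: a one-hidden-layer feedforward network whose weights are updated after every single training example, with the training set stochastically permuted at the start of each cycle. The architecture, activation function, learning rate, and toy data are illustrative assumptions, not the paper's exact construction.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def online_gradient_train(X, y, hidden=5, eta=0.1, cycles=100, seed=0):
    """Online gradient method for a one-hidden-layer network (sketch).

    The training examples are permuted stochastically at the start of
    each cycle, and the weights are updated after every example
    (online, not batch) -- the setting the abstract describes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = rng.normal(scale=0.5, size=(hidden, d))   # input-to-hidden weights
    w = rng.normal(scale=0.5, size=hidden)        # hidden-to-output weights

    for _ in range(cycles):
        # Stochastic permutation of the training set for this cycle.
        for i in rng.permutation(n):
            x, t = X[i], y[i]
            h_in = V @ x              # hidden pre-activations
            h = sigmoid(h_in)         # hidden activations
            out = w @ h               # linear output unit
            err = out - t             # residual of the squared error
            # Gradients of 0.5 * err**2 with respect to the weights.
            grad_w = err * h
            grad_V = np.outer(err * w * sigmoid_prime(h_in), x)
            # Online update: weights change after each example.
            w -= eta * grad_w
            V -= eta * grad_V
    return V, w

# Toy usage: fit a small synthetic regression problem.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))
    y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))
    V, w = online_gradient_train(X, y)
    pred = sigmoid(X @ V.T) @ w
    print("mean squared error:", np.mean((pred - y) ** 2))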
Citation
Pages: 165-176
Page count: 12
Related papers
7 entries in total
[1] Fine, T.L.; Mukherjee, S. Parameter convergence and learning curves for neural networks. Neural Computation, 1999, 11(3): 747-769.
[3] Kuan, C.M.; Hornik, K. Convergence of learning algorithms with constant learning rates. IEEE Transactions on Neural Networks, 1991, 2(5): 484-489.
[4] Li, Z. Journal of Mathematical Research and Exposition, 2001, 21: 219.
[5] Li, Z.X. Journal of Mathematical Research and Exposition, 2003, 28 (in press).
[6] Wu, W.; Xu, Y.S. Deterministic convergence of an online gradient method for neural networks. Journal of Computational and Applied Mathematics, 2002, 144(1-2): 335-347.
[7] Wu, W.; Feng, G.R.; Li, X. Training multilayer perceptrons via minimization of sum of ridge functions. Advances in Computational Mathematics, 2002, 17(4): 331-347.