Convergence of a Gradient Algorithm with Penalty for Training Two-layer Neural Networks

Cited by: 6
Authors
Shao, Hongmei [1 ]
Liu, Lijun [2 ]
Zheng, Gaofeng [3 ]
Affiliations
[1] China Univ Petr, Coll Math & Comput Sci, Dongying 257061, Peoples R China
[2] Dalian Nationalities Univ, Dept Math, Dalian 116605, Peoples R China
[3] JANA Solut Inc, Tokyo 1050014, Japan
Source
2009 2ND IEEE INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND INFORMATION TECHNOLOGY, VOL 4 | 2009
DOI
10.1109/ICCSIT.2009.5234616
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used to train a two-layer feedforward neural network. To illustrate the above theoretical findings, numerical experiments are conducted on a linearly separable problem and the simulation results are presented.
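The training scheme the abstract describes, conventional squared error plus a squared (L2) penalty on the weights, minimized by batch gradient descent on a two-layer feedforward network, can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the network size, the learning rate `eta`, and the penalty coefficient `lam` are assumed values chosen for the toy problem, and the linearly separable data set is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy problem: label is 1 iff x1 + x2 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
n = len(X)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer feedforward network: 2 inputs -> 4 tanh hidden units -> 1 sigmoid output.
W1 = rng.normal(scale=0.5, size=(2, 4))   # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(4,))     # hidden-to-output weights

eta = 0.5    # learning rate (assumed value)
lam = 1e-3   # penalty coefficient lambda (assumed value)

for epoch in range(2000):
    H = np.tanh(X @ W1)       # hidden activations, shape (n, 4)
    out = sigmoid(H @ W2)     # network outputs, shape (n,)
    # Penalized error: E = 1/(2n) * sum((out - y)^2) + (lam/2) * (||W1||^2 + ||W2||^2)
    delta = (out - y) * out * (1.0 - out)   # output-layer error signal
    # Each gradient gets an extra lam * W term from the squared penalty.
    gW2 = H.T @ delta / n + lam * W2
    gW1 = X.T @ (np.outer(delta, W2) * (1.0 - H**2)) / n + lam * W1
    W2 -= eta * gW2
    W1 -= eta * gW1

acc = np.mean((out > 0.5) == (y > 0.5))   # training accuracy after 2000 epochs
```

The penalty term `lam * W` in each gradient pulls the weights toward zero at every step, which is what keeps the weight sequence bounded; the paper's theorems concern exactly this boundedness and the convergence of the resulting iteration.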
Pages: 76 / +
Page count: 2
References (10 of 14 shown)
[1] [Anonymous], 1970, ITERATIVE SOLUTION N
[2] [Anonymous], 2001, NEURAL NETWORKS COMP
[3] Hinton, G.E., "Connectionist learning procedures," Artificial Intelligence, 1989, 40(1-3): 185-234.
[4] Ishikawa, M., "Structural learning with forgetting," Neural Networks, 1996, 9(3): 509-521.
[5] Looney, C.G., IEEE Transactions on Knowledge and Data Engineering, 1992, 8: 465.
[6] McLoone, S.; Irwin, G., "Improving neural network training solutions using regularisation," Neurocomputing, 2001, 37: 71-90.
[7] Reed, R., "Pruning algorithms: a survey," IEEE Transactions on Neural Networks, 1993, 4(5): 740-747.
[8] Setiono, R., "A penalty-function approach for pruning feedforward neural networks," Neural Computation, 1997, 9(1): 185-204.
[9] Shao, H., J INFORM COMPUTATION, 2007, 4: 251.
[10] Weigend, A.S., IEEE IJCNN, 1991, p. 2374, DOI 10.1109/IJCNN.1991.170743.