Convergence of gradient method with penalty for Ridge Polynomial neural network

Cited by: 17
Authors
Yu, Xin [1]
Chen, Qingfeng [1]
Affiliations
[1] Guangxi Univ, Sch Comp Elect & Informat, Nanning 530004, Peoples R China
Keywords
Ridge Polynomial neural network; Gradient algorithm; Monotonicity; Convergence; NONSTATIONARY
DOI
10.1016/j.neucom.2012.05.022
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, a penalty term is added to the conventional error function to improve the generalization ability of the Ridge Polynomial neural network. To guide the choice of appropriate learning parameters, we establish a monotonicity theorem and two convergence theorems, one for weak convergence and one for strong convergence, for the synchronous gradient method with penalty for this network. Experimental results on a function approximation problem illustrate that the theoretical results are valid. (C) 2012 Elsevier B.V. All rights reserved.
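The abstract describes a batch ("synchronous") gradient method in which a penalty term is added to the conventional error function. The Python sketch below is an illustration of that idea only, not the authors' algorithm: a small Ridge Polynomial network (a sum of pi-sigma blocks) is trained by accumulating per-sample error gradients and applying an L2-type penalty gradient once per synchronous update. The tanh output activation, the network order N, the coefficients eta and lam, and all function names (rpnn_forward, rpnn_grad, train) are assumptions made for this example.

import numpy as np

def rpnn_forward(W, x_aug):
    # Block i computes the product of its (i+1) ridge terms h[i][j] = w_ij . x_aug;
    # the output is a tanh of the sum of all block products (assumed activation).
    h = [Wi @ x_aug for Wi in W]                  # h[i] has shape (i+1,)
    s = sum(np.prod(hi) for hi in h)              # sum of pi-sigma block products
    return np.tanh(s), h

def rpnn_grad(W, x_aug, y_true):
    # Gradient of the plain squared error 0.5*(f(x) - y)^2 for one sample;
    # the penalty gradient lam*W is added once per batch update in train().
    y, h = rpnn_forward(W, x_aug)
    err = (y - y_true) * (1.0 - y ** 2)           # chain rule through tanh
    grads = []
    for i, Wi in enumerate(W):
        g = np.empty_like(Wi)
        for j in range(len(Wi)):
            others = np.prod(np.delete(h[i], j))  # product of the remaining ridge terms
            g[j] = err * others * x_aug
        grads.append(g)
    return grads

def train(X, Y, N=3, eta=0.05, lam=1e-4, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Block i holds (i+1) weight vectors; a trailing input of 1 absorbs the biases.
    W = [rng.normal(scale=0.1, size=(i + 1, d + 1)) for i in range(N)]
    Xa = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(epochs):
        total = [np.zeros_like(w) for w in W]     # synchronous (batch) accumulation
        for x_aug, y in zip(Xa, Y):
            for t, g in zip(total, rpnn_grad(W, x_aug, y)):
                t += g
        # Penalized update: gradient of (average error) + (lam/2)*||W||^2.
        W = [w - eta * (t / len(X) + lam * w) for w, t in zip(W, total)]
    return W

# Usage: a 1-D function approximation task, in the spirit of the paper's experiment.
X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
Y = np.sin(np.pi * X).ravel()
W = train(X, Y)

Applying the penalty gradient lam*w once per synchronous update corresponds to minimizing a penalized error of the form E(W) + (lam/2)*||W||^2, which is the standard way such a penalty term enters the weight update.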
Pages: 405-409
Page count: 5