Training the random neural network using quasi-Newton methods

Cited: 44
Authors
Likas, A [1 ]
Stafylopatis, A
Affiliations
[1] Univ Ioannina, Dept Comp Sci, GR-45110 Ioannina, Greece
[2] Natl Tech Univ Athens, Dept Elect & Comp Engn, GR-15773 Zografos, Greece
Keywords
DOI
10.1016/S0377-2217(99)00482-8
Chinese Library Classification: C93 [Management Science]
Discipline codes: 12; 1201; 1202; 120202
Abstract
Training the random neural network (RNN) is generally formulated as the minimization of an appropriate error function with respect to the network parameters (the weights of the positive and negative connections). We propose an error-minimization technique based on quasi-Newton optimization methods. Compared with simple gradient descent, such methods exploit the gradient information more effectively, at the cost of greater computational expense and implementation complexity. In this work we specify the details needed to apply quasi-Newton methods to RNN training, and we provide comparative experimental results on several well-known test problems, which confirm the superiority of the approach. (C) 2000 Elsevier Science B.V. All rights reserved.
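The training scheme the abstract describes, minimizing an error function over the network weights with a quasi-Newton method rather than plain gradient descent, can be sketched generically. The snippet below is only an illustration of that idea: it uses a simple least-squares objective as a stand-in for the RNN error function (which depends on the network's steady-state equations and is not given here) and SciPy's BFGS quasi-Newton optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the RNN training error E(w): a least-squares
# objective over a weight vector w. The actual RNN error involves the
# network's steady-state neuron activities, omitted in this record.
def error(w, X, y):
    return 0.5 * np.sum((X @ w - y) ** 2)

def error_grad(w, X, y):
    # Analytic gradient, supplied so BFGS can build its inverse-Hessian
    # approximation from gradient information (the paper's key point
    # versus simple gradient descent).
    return X.T @ (X @ w - y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

# Quasi-Newton (BFGS) minimization of the error function.
res = minimize(error, x0=np.zeros(3), args=(X, y),
               jac=error_grad, method="BFGS")
print(res.x)
```

For a convex objective like this one, BFGS recovers the generating weights in a handful of iterations; on the nonconvex RNN error it would similarly trade extra per-step cost for faster convergence than gradient descent, as the abstract claims.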
Pages: 331-339 (9 pages)