Training in the random neural network (RNN) is generally specified as the minimization of an appropriate error function with respect to the parameters of the network (weights corresponding to positive and negative connections). We propose here a technique for error minimization that is based on the use of quasi-Newton optimization techniques. Such techniques offer more sophisticated exploitation of the gradient information than simple gradient descent methods, but are computationally more expensive and more difficult to implement. In this work we specify the details necessary for applying quasi-Newton methods to the training of the RNN, and provide comparative experimental results from applying these methods to some well-known test problems, which confirm the superiority of the approach over gradient descent training. (C) 2000 Elsevier Science B.V. All rights reserved.
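As a rough illustration of the kind of step a quasi-Newton method takes, the sketch below shows a standard BFGS inverse-Hessian update applied to a flattened weight vector. The weight representation, the gradient callable, and the fixed step size are illustrative assumptions rather than the paper's implementation, and the non-negativity constraint on RNN connection weights is only noted in a comment.

    import numpy as np

    def bfgs_step(w, grad_fn, H, step=1.0):
        """One quasi-Newton (BFGS) update on the weight vector w.

        w       : flattened network weights (illustrative representation)
        grad_fn : callable returning the gradient of the training error at w
        H       : current approximation of the inverse Hessian
        """
        g = grad_fn(w)
        p = -H @ g                 # search direction from the inverse-Hessian approximation
        w_new = w + step * p       # a line search would normally choose the step size
        s = w_new - w
        y = grad_fn(w_new) - g
        sy = s @ y
        if sy > 1e-10:             # curvature condition; skip the update otherwise
            rho = 1.0 / sy
            I = np.eye(len(w))
            V = I - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        # Note: RNN connection weights must stay non-negative, so in practice the
        # updated weights would be projected back onto the feasible region.
        return w_new, H

In practice the step size would be set by a line search and the updated weights projected back onto the non-negative orthant required by the RNN; neither refinement is shown here.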
Affiliation:
Univ Rene Descartes Paris V, Ecole Hautes Etud Informat, 45 Rue St Peres, F-75006 Paris, France