A non-convergent on-line training algorithm for neural networks

Cited by: 0
Authors: Utans, J [1]
Affiliations: [1] London Business Sch, London NW1 4SA, England
DOI: not available
CLC classification: TP18 [Artificial intelligence theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Stopped training is a method for avoiding over-fitting of neural network models by preventing an iterative optimization method from reaching a local minimum of the objective function. It is motivated by the observation that over-fitting occurs gradually as training progresses. The stopping time is typically determined by monitoring the expected generalization performance of the model, as approximated by the error on a validation set. In this paper we propose to use an analytic estimate for this purpose. However, such estimates require knowledge of the analytic form of the objective function used for training the network, and they are only applicable when the weights correspond to a local minimum of this objective function. For this reason, we propose the use of an auxiliary, regularized objective function. The algorithm is "self-contained" and does not require splitting the data into a training set and a separate validation set.
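The abstract contrasts the proposed analytic stopping estimate with the conventional validation-set approach to stopped training. A minimal sketch of that conventional approach (not the paper's own algorithm), using a toy over-parameterized polynomial regression in NumPy; all names and hyperparameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: noisy samples of a cubic, split into train / validation.
X = rng.uniform(-1, 1, size=(60, 1))
y = X[:, 0] ** 3 + 0.1 * rng.normal(size=60)
X_train, y_train = X[:40], y[:40]
X_val, y_val = X[40:], y[40:]

# Over-parameterized polynomial features invite over-fitting.
def features(X, degree=9):
    return np.hstack([X ** d for d in range(degree + 1)])

Phi_train, Phi_val = features(X_train), features(X_val)
w = np.zeros(Phi_train.shape[1])

def mse(Phi, y, w):
    return np.mean((Phi @ w - y) ** 2)

# Gradient descent with early stopping on the validation error: keep the
# weights with the lowest validation error seen so far, and stop once the
# validation error has failed to improve for `patience` consecutive steps.
best_w, best_val, patience, bad_steps = w.copy(), np.inf, 50, 0
for step in range(20000):
    grad = 2 * Phi_train.T @ (Phi_train @ w - y_train) / len(y_train)
    w -= 0.05 * grad
    val_err = mse(Phi_val, y_val, w)
    if val_err < best_val:
        best_val, best_w, bad_steps = val_err, w.copy(), 0
    else:
        bad_steps += 1
        if bad_steps >= patience:
            break  # stop before reaching a minimum of the training objective

w = best_w  # restore the weights with the best estimated generalization
```

Note the cost the paper targets: this scheme sacrifices part of the data to the validation set, whereas an analytic generalization estimate would let all samples be used for training.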
Pages: 913 - 921 (9 pages)
Related papers (50 total)
  • [1] Non-Convergent Truth
    Schaubroeck, Katrien
    ETHICAL PERSPECTIVES, 2010, 17 (04) : 652 - 656
  • [2] Convergent on-line algorithms for supervised learning in neural networks
    Grippo, L
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2000, 11 (06): : 1284 - 1299
  • [3] ALGORITHMS FOR NON-CONVERGENT SEQUENCES
    DELAHAYE, JP
    NUMERISCHE MATHEMATIK, 1980, 34 (03) : 333 - 347
  • [4] A training data selection in on-line training for multilayer neural networks
    Hara, K
    Nakayama, K
    Kharaf, AAM
    IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE, 1998, : 2247 - 2252
  • [5] On-line training of neural networks: A sliding window approach for the Levenberg-Marquardt algorithm
    Dias, FM
    Antunes, A
    Vieira, J
    Mota, AM
    ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING APPLICATIONS: A BIOINSPIRED APPROACH, PT 2, PROCEEDINGS, 2005, 3562 : 577 - 585
  • [6] A novel method for on-line training of dynamic neural networks
    Chowdhury, FN
    PROCEEDINGS OF THE 2001 IEEE INTERNATIONAL CONFERENCE ON CONTROL APPLICATIONS (CCA'01), 2001, : 161 - 166
  • [7] Adaptive stepsize algorithms for on-line training of neural networks
    Magoulas, GD
    Plagianakos, VP
    Vrahatis, MN
    NONLINEAR ANALYSIS-THEORY METHODS & APPLICATIONS, 2001, 47 (05) : 3425 - 3430
  • [8] Baker domains and non-convergent deformations
    Robles, Rodrigo
    Sienra, Guillermo
    JOURNAL OF FRACTAL GEOMETRY, 2022, 9 (1-2) : 1 - 22
  • [9] On-line training of recurrent neural networks with continuous topology adaptation
    Obradovic, D
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1996, 7 (01): : 222 - 228
  • [10] Multilayer neural networks: an experimental evaluation of on-line training methods
    Martí, R
    El-Fallahi, A
    COMPUTERS & OPERATIONS RESEARCH, 2004, 31 (09) : 1491 - 1513