A constructive algorithm for feedforward neural networks with incremental training

Cited by: 42
Authors
Liu, DR [1 ]
Chang, TS
Zhang, Y
Affiliations
[1] Univ Illinois, Dept Elect & Comp Engn, Chicago, IL 60607 USA
[2] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
Funding
National Science Foundation (USA);
Keywords
constructive algorithm; feedforward neural networks; incremental training; linear programming; quadratic programming;
DOI
10.1109/TCSI.2002.805733
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic & Communication Technology];
Discipline Codes
0808; 0809;
Abstract
We develop, in this brief, a new constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure in which training patterns are learned one by one. Our algorithm starts with a single training pattern and a single hidden-layer neuron. During training, when the algorithm gets stuck in a local minimum, we attempt to escape from it by using a weight-scaling technique. Only after several consecutive failed attempts to escape from a local minimum do we allow the network to grow by adding a hidden-layer neuron. At this stage, we employ an optimization procedure based on quadratic/linear programming to select initial weights for the newly added neuron. This optimization procedure tends to make the network reach the error tolerance with little or no further training after a hidden-layer neuron is added. Our simulation results indicate that the present constructive algorithm obtains neural networks very close to minimal structures (with the least possible number of hidden-layer neurons) and that convergence (to a solution) in neural network training can be guaranteed. We tested our algorithm extensively using a widely used benchmark, the parity problem.
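The control flow described in the abstract can be sketched in code. The following is an illustrative sketch only, not the authors' implementation: it trains a growable one-hidden-layer tanh network on 2-bit parity (XOR) by plain gradient descent, shrinks the hidden weights when training stalls (standing in for the paper's weight-scaling escape), and adds a hidden neuron only after several consecutive failed escapes. The new neuron's weights are drawn at random here, whereas the paper selects them by solving a quadratic/linear program; the incremental pattern-by-pattern presentation, the scaling factor, and all thresholds are likewise simplifying assumptions.

```python
import numpy as np

class ConstructiveNet:
    """One-hidden-layer tanh network whose hidden layer can grow."""

    def __init__(self, n_in, rng):
        self.rng = rng
        self.W = rng.normal(0.0, 0.5, (1, n_in + 1))  # hidden weights (+bias col)
        self.v = rng.normal(0.0, 0.5, (2, 1))         # output weights (+bias row)

    def forward(self, X):
        Xb = np.hstack([X, np.ones((len(X), 1))])     # append bias input
        H = np.tanh(Xb @ self.W.T)
        Hb = np.hstack([H, np.ones((len(X), 1))])
        return Xb, H, Hb, Hb @ self.v

    def mse(self, X, y):
        return float(np.mean((self.forward(X)[3] - y) ** 2))

    def gd_step(self, X, y, lr):
        Xb, H, Hb, out = self.forward(X)
        err = (out - y) / len(X)                      # gradient of 0.5 * MSE
        dH = (err @ self.v[:-1].T) * (1.0 - H ** 2)   # backprop through tanh
        self.v -= lr * (Hb.T @ err)
        self.W -= lr * (dH.T @ Xb)

    def scale_weights(self, factor=0.7):
        # Escape attempt: shrink hidden weights to move off the plateau
        # (stand-in for the paper's weight-scaling technique; 0.7 is our choice).
        self.W *= factor

    def add_neuron(self):
        # Grow the hidden layer.  The paper chooses the new weights via
        # quadratic/linear programming; here we simply draw them at random.
        self.W = np.vstack([self.W,
                            self.rng.normal(0.0, 0.5, (1, self.W.shape[1]))])
        self.v = np.vstack([self.v[:-1],
                            self.rng.normal(0.0, 0.5, (1, 1)),
                            self.v[-1:]])

def train(net, X, y, tol=1e-2, max_fail=3, max_hidden=6,
          phase=2000, lr=0.5, max_phases=40):
    fails = 0
    for _ in range(max_phases):                       # overall budget
        prev = net.mse(X, y)
        for _ in range(phase):
            net.gd_step(X, y, lr)
        cur = net.mse(X, y)
        if cur < tol:                                 # reached error tolerance
            return cur
        if prev - cur < 1e-4:                         # stuck in a local minimum
            net.scale_weights()                       # try to escape first
            fails += 1
            if fails >= max_fail and net.W.shape[0] < max_hidden:
                net.add_neuron()                      # grow only after repeated failure
                fails = 0
        else:
            fails = 0                                 # still making progress
    return net.mse(X, y)

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])                # 2-bit parity (XOR)
net = ConstructiveNet(2, rng)
before = net.mse(X, y)
after = train(net, X, y)
```

A single hidden tanh neuron cannot represent XOR, so training stalls, the escape attempts fail, and the network grows, which mirrors the "grow only when escape fails" policy of the paper at a toy scale.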
Pages: 1876-1879 (4 pages)
Related Papers
50 records
  • [21] A FAST AND ROBUST LEARNING ALGORITHM FOR FEEDFORWARD NEURAL NETWORKS
    WEYMAERE, N
    MARTENS, JP
    NEURAL NETWORKS, 1991, 4 (03) : 361 - 369
  • [22] A general backpropagation algorithm for feedforward neural networks learning
    Yu, XH
    Efe, MO
    Kaynak, O
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2002, 13 (01): 251 - 254
  • [23] Using spotted hyena optimizer for training feedforward neural networks
    Luo, Qifang
    Li, Jie
    Zhou, Yongquan
    Liao, Ling
    COGNITIVE SYSTEMS RESEARCH, 2021, 65 (65): 1 - 16
  • [24] Training feedforward neural networks with Bayesian hyper-heuristics
    Schreuder, A. N.
    Bosman, A. S.
    Engelbrecht, A. P.
    Cleghorn, C. W.
    INFORMATION SCIENCES, 2025, 686
  • [25] Using Spotted Hyena Optimizer for Training Feedforward Neural Networks
    Li, Jie
    Luo, Qifang
    Liao, Ling
    Zhou, Yongquan
    INTELLIGENT COMPUTING METHODOLOGIES, ICIC 2018, PT III, 2018, 10956 : 828 - 833
  • [26] Regularizing the effect of input noise injection in feedforward neural networks training
    Abd-Krim Seghouane
    Yassir Moudden
    Gilles Fleury
    Neural Computing & Applications, 2004, 13 : 248 - 254
  • [27] Regularizing the effect of input noise injection in feedforward neural networks training
    Seghouane, AK
    Moudden, Y
    Fleury, G
    NEURAL COMPUTING & APPLICATIONS, 2004, 13 (03): 248 - 254
  • [28] A new optimization algorithm for single hidden layer feedforward neural networks
    Li, Leong Kwan
    Shao, Sally
    Yiu, Ka-Fai Cedric
    APPLIED SOFT COMPUTING, 2013, 13 (05) : 2857 - 2862
  • [29] Optimization of feedforward neural networks
    Han, J
    Moraga, C
    Sinne, S
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 1996, 9 (02) : 109 - 119
  • [30] A recursive algorithm based on the extended Kalman filter for the training of feedforward neural models
    Rivals, I
    Personnaz, L
    NEUROCOMPUTING, 1998, 20 (1-3) : 279 - 294