A constructive algorithm for feedforward neural networks with incremental training

Cited by: 43
Authors
Liu, DR [1]
Chang, TS
Zhang, Y
Affiliations
[1] Univ Illinois, Dept Elect & Comp Engn, Chicago, IL 60607 USA
[2] Univ Calif Davis, Dept Elect & Comp Engn, Davis, CA 95616 USA
Funding
US National Science Foundation;
Keywords
constructive algorithm; feedforward neural networks; incremental training; linear programming; quadratic programming;
DOI
10.1109/TCSI.2002.805733
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809
Abstract
We develop, in this brief, a new constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure in which training patterns are learned one by one. Our algorithm starts with a single training pattern and a single hidden-layer neuron. During training, when the algorithm gets stuck in a local minimum, it attempts to escape by means of a weight-scaling technique. Only after several consecutive failed escape attempts do we allow the network to grow by adding a hidden-layer neuron. At that stage, we employ an optimization procedure based on quadratic/linear programming to select the initial weights of the newly added neuron. This procedure tends to bring the network within the error tolerance with little or no further training after the neuron is added. Our simulation results indicate that the proposed constructive algorithm obtains neural networks very close to minimal structures (with the fewest possible hidden-layer neurons) and that convergence to a solution during training is guaranteed. We tested the algorithm extensively on the parity problem, a widely used benchmark.
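To make the training loop concrete, the following is a minimal, hypothetical Python/NumPy sketch of the constructive scheme the abstract outlines, not the authors' exact algorithm. It learns patterns one at a time, tries to escape a local minimum by scaling the weights, and grows the hidden layer only after repeated failed escapes. The paper's quadratic/linear-programming initialization of the new neuron is replaced here by random initialization for brevity, and all hyperparameters (tol, lr, scale, max_escapes) are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_constructive(X, y, tol=0.1, max_escapes=3,
                       epochs=3000, lr=0.2, scale=0.7):
    n_in = X.shape[1]
    # Start with a single hidden (tanh) neuron and a linear output unit.
    W1 = rng.normal(scale=0.5, size=(n_in, 1)); b1 = np.zeros(1)
    W2 = rng.normal(scale=0.5, size=(1, 1));    b2 = np.zeros(1)

    def forward(Xb):
        H = np.tanh(Xb @ W1 + b1)   # hidden layer
        return H, H @ W2 + b2       # linear output layer

    def fit(Xb, yb):
        nonlocal W1, b1, W2, b2
        for _ in range(epochs):     # plain batch gradient descent
            H, out = forward(Xb)
            err = out - yb
            if np.max(np.abs(err)) < tol:
                return True                       # error tolerance reached
            d2 = err                              # linear output units
            d1 = (d2 @ W2.T) * (1.0 - H**2)       # backprop through tanh
            W2 -= lr * H.T @ d2 / len(Xb); b2 -= lr * d2.mean(axis=0)
            W1 -= lr * Xb.T @ d1 / len(Xb); b1 -= lr * d1.mean(axis=0)
        return False                # stuck; treat as a local minimum

    for k in range(1, len(X) + 1):  # incremental training: one pattern at a time
        escapes = 0
        while not fit(X[:k], y[:k]):
            if escapes < max_escapes:
                # Weight scaling as an escape attempt.
                W1 *= scale; W2 *= scale; escapes += 1
            else:
                # Grow: append one hidden neuron (randomly initialized here,
                # where the paper would solve a QP/LP for its weights).
                W1 = np.hstack([W1, rng.normal(scale=0.5, size=(n_in, 1))])
                b1 = np.append(b1, 0.0)
                W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, 1))])
                escapes = 0
    return W1, b1, W2, b2

# 2-bit parity (XOR), the benchmark family mentioned in the abstract.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[-1.], [1.], [1.], [-1.]])
W1, b1, W2, b2 = train_constructive(X, y)
print("hidden neurons used:", W1.shape[1])  # a minimal net needs 2 for XOR
```

The growth criterion (grow only after max_escapes consecutive failed weight-scaling escapes) is what keeps the resulting network close to a minimal structure: a neuron is added only when the current architecture has repeatedly demonstrated that it cannot fit the patterns seen so far.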
Pages: 1876-1879
Page count: 4