Single-iteration training algorithm for multi-layer feed-forward neural networks

Cited by: 7
Authors
Barhen, J
Cogswell, R
Protopopescu, V
Affiliations
[1] Oak Ridge Natl Lab, Ctr Engn Sci Adv Res, Oak Ridge, TN 37831 USA
[2] Monmouth Coll, Dept Math & Comp Sci, Monmouth, IL 61462 USA
Keywords
virtual input layer; neural network training; fast learning; SVD;
DOI
10.1023/A:1009682730770
CLC (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A new methodology for neural learning is presented. Only a single iteration is needed to train a feed-forward network with near-optimal results. This is achieved by introducing a key modification to the conventional multi-layer architecture. A virtual input layer is implemented, which is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating-direction singular value decompositions is then used to determine the inter-layer synaptic weights precisely. This computational paradigm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network. Examples show that the trained neural networks generalize well.
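The core recipe in the abstract (a fixed nonlinear "virtual input layer" feeding linear synapses that are solved in closed form) can be illustrated with a minimal sketch. The sketch below is an assumption-laden simplification, not the authors' exact algorithm: the random tanh projection, the layer sizes, and the toy data are all invented here, and the full alternating-direction sweep over several layers is collapsed to a single SVD-based least-squares solve (NumPy's lstsq computes the minimum-norm solution via the SVD).

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

# "Virtual input layer": a fixed nonlinear transfer applied to the nominal
# inputs. The random projection and tanh activation are assumptions.
V = rng.normal(size=(1, 32))   # fixed, untrained projection weights
H = np.tanh(X @ V)             # nonlinear virtual features

# Linear synapses to the output, determined in one shot: an SVD-based
# least-squares fit rather than iterative gradient descent.
W, *_ = np.linalg.lstsq(H, Y, rcond=None)

Y_hat = H @ W
print("training RMSE:", np.sqrt(np.mean((Y_hat - Y) ** 2)))

In the paper's full scheme, a closed-form solve of this kind would presumably be applied alternately across adjacent layer pairs; the single-layer case above is only meant to show how separating the linear propagation from the nonlinear activation removes the need for iterative training.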
Pages: 113-129
Page count: 17