Interlocking of learning and orthonormalization in RRLSA

Cited by: 11
Authors
Möller, R. [1]
Affiliation
[1] Max Planck Inst Psychol Res, D-80799 Munich, Germany
Keywords
principal component analysis; orthonormalization; Gram-Schmidt method; recursive least squares;
DOI
10.1016/S0925-2312(02)00671-9
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In sequential principal component analyzers based on deflation of the input vector, deviations from orthogonality of the previous eigenvector estimates may entail a severe loss of orthogonality in the next stages. A combination of the learning method with subsequent Gram-Schmidt orthonormalization solves this problem, but increases the computational effort. For the "robust recursive least squares learning algorithm" we show how the effort may be reduced by a factor of up to two by interlocking learning and the Gram-Schmidt method. (C) 2002 Elsevier Science B.V. All rights reserved.
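The record contains only the abstract, not the algorithm itself. As a rough illustration of the idea, the NumPy sketch below interleaves a Gram-Schmidt orthonormalization step with each stage of an RRLSA-style update on the deflated input. The function name `rrlsa_gs_step`, the forgetting factor `beta`, and the exact ordering of steps are assumptions modelled on the RRLSA update of Ouyang et al. [3]; this is not Möller's interlocked scheme, which additionally shares intermediate products between deflation and Gram-Schmidt to obtain the saving of up to a factor of two.

```python
import numpy as np

def rrlsa_gs_step(W, x, beta=0.998):
    """One sweep over all stages: Gram-Schmidt, learning, deflation.

    W    : (m, d) array; row j is the unnormalized estimate of
           eigenvector j (its norm tracks the eigenvalue in RRLSA)
    x    : (d,) input sample
    beta : forgetting factor of the recursive least squares update
    """
    m, d = W.shape
    Q = np.empty((m, d))                 # orthonormalized directions
    x_defl = x.astype(float).copy()      # input, deflated stage by stage
    for j in range(m):
        q = W[j] / np.linalg.norm(W[j])
        # Gram-Schmidt against the directions already produced in this
        # sweep; these are the same vectors used for deflation below,
        # which is the overlap the interlocking exploits.
        for i in range(j):
            q -= (Q[i] @ q) * Q[i]
        q /= np.linalg.norm(q)
        Q[j] = q
        # RRLSA-style learning step on the deflated input.
        y = q @ x_defl
        W[j] = beta * W[j] + y * x_defl
        # Deflation: remove the extracted component for the next stage.
        x_defl = x_defl - y * q
    return W, Q

# Usage sketch: track the 3 leading eigenvectors of correlated 10-d data.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))        # fixed mixing -> anisotropic data
W = rng.standard_normal((3, 10))
for _ in range(2000):
    W, Q = rrlsa_gs_step(W, A @ rng.standard_normal(10))
# Rows of Q now approximate the leading eigenvectors of A @ A.T.
```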
Pages: 429-433
Number of pages: 5
Related papers
4 items in total
[1] Bannour, S.; Azimi-Sadjadi, M.R. Principal component extraction using recursive least-squares learning. IEEE Transactions on Neural Networks, 1995, 6(2): 457-469.
[2] Golub, G.H.; Van Loan, C.F. Matrix Computations. 2013.
[3] Ouyang, S.; Bao, Z.; Liao, G.S. Robust recursive least squares learning algorithm for principal component analysis. IEEE Transactions on Neural Networks, 2000, 11(1): 215-221.
[4] Sanger, T.D. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, 1989, 2(6): 459-473.