Finite-sample convergence properties of the LVQ1 algorithm and the batch LVQ1 algorithm

Cited: 4
Authors
Bermejo, S [1 ]
Cabestany, J [1 ]
Affiliation
[1] Univ Politecn Catalunya, Dept Elect Engn, ES-08034 Barcelona, Spain
Keywords
LVQ1 algorithm; asymptotic convergence; online gradient descent; finite-sample properties; BLVQ1; Newton optimisation
DOI
10.1023/A:1011328322315
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This letter addresses the asymptotic convergence of Kohonen's LVQ1 algorithm when the number of training samples is finite, using an analysis based on dynamical systems and optimisation theory. It establishes sufficient conditions for the convergence of LVQ1 near a minimum of its cost function under constant step sizes and cyclic sampling. It also proposes a batch version of LVQ1, based on the very fast Newton optimisation method, that removes the on-line version's dependence on the order in which the training samples are supplied.
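As context for the abstract, a minimal sketch of the on-line LVQ1 update rule with a constant step size is shown below. The function and variable names are illustrative, not taken from the paper: the nearest prototype to a sample is attracted to it when their class labels match and repelled otherwise.

```python
# Minimal sketch of one on-line LVQ1 step (constant step size alpha).
# Names (lvq1_step, prototypes, labels) are illustrative assumptions.

def lvq1_step(prototypes, labels, x, y, alpha=0.1):
    """Update the prototype nearest to sample x with class label y."""
    # Squared Euclidean distance from x to each prototype.
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in prototypes]
    c = min(range(len(prototypes)), key=dists.__getitem__)
    # Attract the winner if its class matches the sample's, else repel it.
    sign = 1.0 if labels[c] == y else -1.0
    prototypes[c] = [wi + sign * alpha * (xi - wi)
                     for wi, xi in zip(prototypes[c], x)]
    return prototypes
```

With cyclic sampling, this step is applied to each training sample in a fixed order on every pass, which is why the on-line version's result can depend on that order; the batch (Newton-based) variant proposed in the letter removes that dependence.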
Pages: 135-157 (23 pages)
References
12 items in total
[1] Benveniste A., 1990, Adaptive Algorithms and Stochastic Approximations
[2] Bishop C. M., 1995, Neural Networks for Pattern Recognition
[3] Bottou L., 1998, Online Algorithms and Stochastic Approximations
[4] Dennis J. E., 1989, Optimization
[5] Devaney R., 1987, An Introduction to Chaotic Dynamical Systems, DOI 10.2307/3619398
[6] Devroye L., 1996, A Probabilistic Theory of Pattern Recognition
[7] Hestenes M. R., 1980, Conjugate Direction Methods in Optimization
[8] Hiriart-Urruty J. B., 1996, Convex Analysis and Minimization Algorithms, V305
[9] Kohonen T., 1996, Self-Organizing Maps
[10] Kohonen T., 1995, LVQ_PAK: The Learning Vector Quantization Program Package