A class of competitive learning models which avoids neuron underutilization problem

Cited by: 0
Authors
Choy, CST [1]
Siu, WC [1]
Affiliation
[1] Hong Kong Polytech Univ, Dept Elect & Informat Engn, Hong Kong, Peoples R China
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1998, Vol. 9, No. 6
Keywords
multiplicatively biased competitive learning; neuron underutilization problem; vector quantization
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper, we study a qualitative property of a class of competitive learning (CL) models, called multiplicatively biased competitive learning (MBCL): with probability one, MBCL avoids neuron underutilization as time goes to infinity. In MBCL, the competition among neurons is biased by a multiplicative term, while only one weight vector is updated per learning step. This is of practical interest because MBCL instances have computational complexities among the lowest of existing CL models. Moreover, in applications such as classification, vector quantizer design, and probability density function estimation, avoiding neuron underutilization is a necessary condition for optimal performance; hence, instances of MBCL can be defined to achieve optimal performance in these applications.
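To make the abstract concrete, here is a minimal sketch of one MBCL-style learning step. The bias term used below is each neuron's win count (the frequency-sensitive choice, which is one instance of the MBCL class); the paper's exact bias function and learning-rate schedule may differ, and the function name `mbcl_step` is illustrative only.

```python
import numpy as np

def mbcl_step(x, weights, counts, lr=0.05):
    """One multiplicatively biased competitive learning step (sketch).

    Competition is biased by a multiplicative term (here: the win
    count of each neuron), and only the winning weight vector is
    updated, as the abstract describes.
    """
    # Biased competition: squared distance multiplied by the bias term.
    dists = np.sum((weights - x) ** 2, axis=1)
    winner = int(np.argmin(counts * dists))
    # Update only the winner's weight vector.
    weights[winner] += lr * (x - weights[winner])
    counts[winner] += 1
    return winner

# Usage: quantize 2-D Gaussian data with 4 neurons.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
weights = rng.normal(size=(4, 2))
counts = np.ones(4)
for x in data:
    mbcl_step(x, weights, counts)
```

Because frequently winning neurons accumulate a larger multiplicative bias, under-used neurons eventually win some competitions, which is the intuition behind the underutilization-avoidance property studied in the paper.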
Pages: 1258-1269
Page count: 12