STOCHASTIC COMPETITIVE LEARNING

Cited by: 49
Authors
KOSKO, B
Affiliation
[1] Department of Electrical Engineering, University of Southern California, Los Angeles
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 1991 / Vol. 2 / Iss. 5
DOI
10.1109/72.134289
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We examine competitive learning systems as stochastic dynamical systems, including continuous and discrete formulations of unsupervised, supervised, and differential competitive learning. These systems estimate an unknown probability density function from random pattern samples and behave as adaptive vector quantizers. Synaptic vectors in feedforward competitive neural networks quantize the pattern space and converge to pattern-class centroids or local probability maxima. A stochastic Lyapunov argument shows that competitive synaptic vectors converge to centroids exponentially quickly and reduces competitive learning to stochastic gradient descent. Convergence does not depend on a specific dynamical model of how neuronal activations change. These results extend to competitive estimation of local covariances and higher-order statistics.
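The abstract describes competitive learning as adaptive vector quantization: synaptic vectors compete for each random pattern sample, the winner moves toward the sample, and each winning vector settles at the centroid of its pattern class. Below is a minimal sketch of that unsupervised winner-take-all rule; the Gaussian-mixture test data, the gain schedule c_t, and all variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of unsupervised competitive learning (winner-take-all
# adaptive vector quantization) as described in the abstract. The data,
# gain schedule, and names are illustrative assumptions.

rng = np.random.default_rng(0)

# Random pattern samples from an unknown density: here a mixture of
# three Gaussian pattern classes in the plane.
centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
samples = np.concatenate(
    [c + 0.5 * rng.standard_normal((400, 2)) for c in centers]
)
rng.shuffle(samples)

# Synaptic vectors of the competing neurons, initialized at random samples.
m = samples[rng.choice(len(samples), size=3, replace=False)].copy()

for t, x in enumerate(samples):
    # Competition: the neuron whose synaptic vector lies closest to the
    # sample wins (Euclidean nearest-neighbor classification).
    j = int(np.argmin(np.linalg.norm(m - x, axis=1)))
    # Learning: only the winner updates, moving toward the sample with a
    # decreasing gain c_t -- a stochastic-approximation step toward the
    # centroid of the winner's decision region.
    c_t = 0.1 / (1.0 + 0.01 * t)
    m[j] += c_t * (x - m[j])

print("converged synaptic vectors:\n", m.round(2))
```

Per the abstract, each such update is a stochastic-gradient-descent step on the quantization error within the winner's decision region, so each synaptic vector m[j] converges to the centroid of its pattern class.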
Pages: 522-529
Number of pages: 8