Incremental learning of spatio-temporal patterns with model selection

Cited by: 0
Authors
Yamauchi, Koichiro [1 ]
Sato, Masayoshi [1 ]
Affiliation
[1] Hokkaido Univ, Grad Sch Informat Sci & Technol, Kita Ku, Kita 14 Jyou Nishi 9 Chyou, Sapporo, Hokkaido, Japan
Source
ARTIFICIAL NEURAL NETWORKS - ICANN 2007, PT 1, PROCEEDINGS | 2007 / Vol. 4668
Keywords
incremental learning; spatio-temporal patterns; model selection; RBF;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper proposes a biologically inspired incremental learning method for spatio-temporal patterns based on our recently reported "Incremental Learning through Sleep (ILS)" method. The method alternately repeats two learning phases: awake and sleep. During the awake phase, the system learns new spatio-temporal patterns by rote, whereas in the sleep phase, it rehearses the recorded new memories interleaved with old ones. The rehearsal process is essential for reconstructing the internal representation of the neural network, so that it not only memorizes the new patterns while retaining old memories but also prunes redundant hidden units. Through this strategy, the neural network achieves high generalization ability. The most attractive property of the method is its ability to learn non-independently distributed samples incrementally, without catastrophic forgetting, while using only a small amount of resources. We applied the method to an experiment on robot control signals, which vary depending on the context of the current situation.
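The awake/sleep alternation described in the abstract can be sketched with a toy RBF regressor. Everything below is an illustrative assumption, not the authors' actual ILS algorithm: the `SleepRBF` class, its distance-based pruning rule, and the least-squares refit on rehearsal data are stand-ins for the paper's rote learning, redundant-unit reduction, and interleaved rehearsal.

```python
import numpy as np

class SleepRBF:
    """Toy RBF regressor with an awake (rote) phase and a sleep
    (rehearsal + pruning) phase. Hypothetical sketch, not ILS itself."""

    def __init__(self, width=0.5, prune_dist=0.1):
        self.width = width            # shared Gaussian width
        self.prune_dist = prune_dist  # units closer than this are redundant
        self.centers = np.empty(0)
        self.weights = np.empty(0)

    def _design(self, xs):
        # Gaussian activations: rows = inputs, columns = hidden units.
        d = xs[:, None] - self.centers[None, :]
        return np.exp(-d**2 / (2.0 * self.width**2))

    def predict(self, xs):
        xs = np.asarray(xs, dtype=float)
        if self.centers.size == 0:
            return np.zeros_like(xs)
        return self._design(xs) @ self.weights

    def awake(self, xs, ys):
        """Rote learning: allocate one hidden unit per new sample."""
        self.centers = np.concatenate([self.centers, np.asarray(xs, float)])
        self.weights = np.concatenate([self.weights, np.asarray(ys, float)])

    def sleep(self):
        """Rehearse pseudo-samples at the current centers, prune
        near-duplicate units, then refit the smaller network."""
        xs = self.centers.copy()
        ys = self.predict(xs)          # pseudo-targets from the current net
        keep = []
        for i, c in enumerate(self.centers):
            if all(abs(c - self.centers[j]) > self.prune_dist for j in keep):
                keep.append(i)
        self.centers = self.centers[keep]
        # Least-squares refit so the pruned net matches the rehearsal data.
        phi = self._design(xs)
        self.weights, *_ = np.linalg.lstsq(phi, ys, rcond=None)
```

After rote-learning a sample whose input nearly duplicates an existing center, a `sleep()` call removes the redundant unit while the refit keeps the network's outputs close to what they were before pruning, loosely mirroring how rehearsal preserves old memories while shrinking the representation.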
Pages: 149 / +
Page count: 2