Self-refreshing memory in artificial neural networks: learning temporal sequences without catastrophic forgetting

Cited by: 27
Authors
Ans, B
Rousset, S
French, RM
Musca, S
Affiliations
[1] Univ Grenoble 2, CNRS, UMR 5105, F-38040 Grenoble 09, France
[2] Univ Liege, B-4000 Liege, Belgium
Keywords
artificial neural networks; temporal sequence learning; catastrophic forgetting; self-refreshing memory;
DOI
10.1080/09540090412331271199
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
While humans forget gradually, highly distributed connectionist networks forget catastrophically: newly learned information often completely erases previously learned information. This is not only implausible cognitively, it is disastrous practically. Yet it is difficult in connectionist cognitive modelling to avoid highly distributed neural networks, if only because of their ability to generalize. A realistic and effective system that solves the problem of catastrophic interference in the sequential learning of 'static' (i.e. non-temporally ordered) patterns has recently been proposed (Robins 1995, Connection Science, 7: 123-146; Robins 1996, Connection Science, 8: 259-275; Ans and Rousset 1997, Comptes Rendus de l'Académie des Sciences Paris, Life Sciences, 320: 989-997; French 1997, Connection Science, 9: 353-379; French 1999, Trends in Cognitive Sciences, 3: 128-135; Ans and Rousset 2000, Connection Science, 12: 1-19). The basic principle is to learn new external patterns interleaved with internally generated 'pseudopatterns' (generated from random activation) that reflect the previously learned information. However, to be credible, this self-refreshing mechanism for static learning must also encompass our human ability to learn serially many temporal sequences of patterns without catastrophic forgetting. Temporal sequence learning is arguably more important than static pattern learning in the real world. In this paper, we develop a dual-network architecture in which self-generated pseudopatterns reflect (non-temporally) all the sequences of temporally ordered items learned previously. Using these pseudopatterns, several self-refreshing mechanisms that eliminate catastrophic forgetting in sequence learning are described, and their efficiency is demonstrated through simulations. Finally, an experiment is presented that shows a close similarity between human and simulated behaviour.
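To make the basic principle concrete, the sketch below illustrates the pseudopattern ("self-refreshing") idea in its simplest static form; it is not the dual reverberating architecture developed in the paper. It assumes a single one-hidden-layer network trained by gradient descent on mean squared error, random binary inputs as pseudopattern seeds, and NumPy only; all function and variable names are illustrative.

# Minimal sketch of pseudopattern self-refreshing (an assumption-laden
# illustration, not the authors' dual-network model).
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hid, n_out):
    return {"W1": rng.normal(0, 0.5, (n_in, n_hid)),
            "W2": rng.normal(0, 0.5, (n_hid, n_out))}

def forward(net, X):
    H = np.tanh(X @ net["W1"])                      # hidden activations
    Y = 1.0 / (1.0 + np.exp(-(H @ net["W2"])))      # sigmoid outputs
    return H, Y

def train(net, X, T, epochs=2000, lr=0.1):
    """Plain gradient descent on mean squared error."""
    for _ in range(epochs):
        H, Y = forward(net, X)
        dY = (Y - T) * Y * (1 - Y)                  # output-layer delta
        dH = (dY @ net["W2"].T) * (1 - H ** 2)      # hidden-layer delta
        net["W2"] -= lr * H.T @ dY / len(X)
        net["W1"] -= lr * X.T @ dH / len(X)

def pseudopatterns(net, n, n_in):
    """Feed random binary inputs through the trained net; the resulting
    input-output pairs ('pseudopatterns') reflect what the net knows."""
    X = rng.integers(0, 2, (n, n_in)).astype(float)
    _, Y = forward(net, X)
    return X, Y

# Task 1: learn a first set of associations.
n_in, n_hid, n_out = 8, 16, 4
X1 = rng.integers(0, 2, (6, n_in)).astype(float)
T1 = rng.integers(0, 2, (6, n_out)).astype(float)
net = init_net(n_in, n_hid, n_out)
train(net, X1, T1)

# Before task 2, sample pseudopatterns that stand in for task-1 knowledge.
Xp, Tp = pseudopatterns(net, 32, n_in)

# Task 2: new patterns learned interleaved with the pseudopatterns rather
# than alone - this interleaving is what protects the old associations.
X2 = rng.integers(0, 2, (6, n_in)).astype(float)
T2 = rng.integers(0, 2, (6, n_out)).astype(float)
train(net, np.vstack([X2, Xp]), np.vstack([T2, Tp]))

# The error on task 1 stays low compared with training on task 2 alone.
_, Y1 = forward(net, X1)
print("task 1 MSE after task 2:", np.mean((Y1 - T1) ** 2))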
Pages: 71-99
Number of pages: 29
Related Papers
60 in total
[1] Altmann, G. T. M. (1999). Science, 284(5416), 875. DOI: 10.1126/science.284.5416.875a
[2] Altmann, G. T. M. (2002). Learning and development in neural networks - the importance of prior experience. Cognition, 85(2), B43-B50.
[3] Anonymous (1995). Connection Science. DOI: 10.1080/09540099550039264
[4] Anonymous (1986). SERIAL ORDER PARALLE
[5] Anonymous. P 24 ANN M COGN SCI
[6] Ans, B., & Rousset, S. (1997). Avoiding catastrophic forgetting by coupling two reverberating neural networks. Comptes Rendus de l'Académie des Sciences, Série III - Sciences de la Vie / Life Sciences, 320(12), 989-997.
[7] Ans, B., Coiton, Y., Gilhodes, J. C., & Velay, J. L. (1994). A neural-network model for temporal sequence learning and motor programming. Neural Networks, 7(9), 1461-1476.
[8] Ans, B., & Rousset, S. (2000). Neural networks with a self-refreshing memory: knowledge transfer in sequential learning tasks without catastrophic forgetting. Connection Science, 12(1), 1-19.
[9] Ans, B. (1990). C. R. Acad. Sci. III - Vie, 311, 7.
[10] Ans, B., Carbonnel, S., & Valdois, S. (1998). A connectionist multiple-trace memory model for polysyllabic word reading. Psychological Review, 105(4), 678-723.