Unsupervised Replay Strategies for Continual Learning with Limited Data

Times Cited: 0
Authors
Bazhenov, Anthony [1 ]
Dewasurendra, Pahan [1 ]
Krishnan, Giri P. [2 ]
Delanois, Jean Erik [3 ]
Affiliations
[1] Del Norte High Sch, San Diego, CA 92127 USA
[2] Univ Calif San Diego, Dept Med, La Jolla, CA USA
[3] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA USA
Source
2024 International Joint Conference on Neural Networks (IJCNN 2024) | 2024
Keywords
Neural networks; limited training data; enhance memory; sleep; continual learning; unsupervised replay; memory replay; reactivation; consolidation; hippocampus; systems
DOI
10.1109/IJCNN60899.2024.10650116
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Artificial neural networks (ANNs) perform poorly when training data are scarce or imbalanced, and they struggle with continual learning, e.g., forgetting previously learned information after training on new tasks. In contrast, the human brain can learn continuously and from just a few examples. This research explores the impact of 'sleep' - an unsupervised phase incorporating stochastic network activation with local Hebbian learning rules - on ANNs trained incrementally on limited and imbalanced datasets, specifically MNIST and Fashion MNIST. We found that introducing a sleep phase significantly improved accuracy in models trained with limited data. When a few tasks were trained sequentially, sleep replay not only rescued previously learned information that had been forgotten after training on a new task but also often improved performance on earlier tasks, especially those trained with limited data. This study highlights the multifaceted role of sleep replay in increasing learning efficiency and facilitating continual learning in ANNs.
Pages: 10
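To make the mechanism described in the abstract concrete, below is a minimal NumPy sketch of what such a sleep phase could look like. This is not the authors' implementation: the function name sleep_phase, the threshold heuristic, and the inc/dec step sizes are illustrative assumptions. Only the general scheme follows the abstract: stochastic binary activity is propagated through the network layer by layer, and each layer's weights receive a local Hebbian update.

```python
import numpy as np

rng = np.random.default_rng(0)

def sleep_phase(weights, input_rates, n_steps=1000, inc=1e-3, dec=5e-4):
    """Unsupervised 'sleep' pass over a feed-forward stack of weights.

    weights     -- list of (n_in, n_out) arrays, one per layer
    input_rates -- per-input firing probabilities, e.g. the mean training image
    inc, dec    -- Hebbian potentiation / depression step sizes (assumed values)
    """
    # Crude per-layer firing thresholds; a real implementation would tune these.
    thresholds = [0.5 * np.sqrt(w.shape[0]) * w.std() for w in weights]
    for _ in range(n_steps):
        # Stochastic binary input drawn from the average input statistics.
        pre = (rng.random(input_rates.shape) < input_rates).astype(float)
        for l, w in enumerate(weights):
            # Binary threshold units stand in for spiking neurons.
            post = (pre @ w > thresholds[l]).astype(float)
            # Local Hebbian rule: potentiate synapses whose pre and post units
            # fired together; depress synapses whose post unit fired without
            # presynaptic input.
            w += inc * np.outer(pre, post) - dec * np.outer(1.0 - pre, post)
            pre = post
    return weights

# Toy usage: a 784-128-10 network with random weights standing in for a
# trained model; a uniform rate near the mean MNIST pixel intensity (~0.13)
# drives the stochastic input activity.
W = [rng.normal(0.0, 0.1, (784, 128)), rng.normal(0.0, 0.1, (128, 10))]
W = sleep_phase(W, np.full(784, 0.13), n_steps=200)
```

Because the update uses only locally available pre- and postsynaptic activity, no labels or stored examples are needed during the sleep phase, which is what makes this form of replay unsupervised.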