Adiabatic replay for continual learning

Authors
Krawczyk, Alexander [1 ]
Gepperth, Alexander [1 ]
Affiliations
[1] Univ Appl Sci Fulda, Dept Appl Comp Sci, Leipziger Str 123, D-36037 Fulda, Germany
Source
2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024 | 2024
Keywords
catastrophic forgetting; continual learning; class-incremental learning; generative replay; selective replay; selective updating; adiabatic replay;
DOI
10.1109/IJCNN60899.2024.10651381
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104; 0812; 0835; 1405;
Abstract
To avoid catastrophic forgetting, many replay-based approaches to continual learning (CL) require, for each learning phase with new data, the replay of samples representing all of the previously learned knowledge. Since this knowledge grows over time, such approaches invest linearly growing computational resources just for re-learning what is already known. In this proof-of-concept study, we propose a generative replay-based CL strategy that we term adiabatic replay (AR), which achieves CL in constant time and memory complexity by exploiting the (very common) situation where each new learning phase is adiabatic, i.e., represents only a small addition to existing knowledge. The employed Gaussian Mixture Models (GMMs) are capable of selectively updating only those parts of their internal representation that are affected by the new task. The information that would otherwise be overwritten by such updates is protected by the selective replay of samples similar to newly arriving ones. Thus, the number of samples to be replayed depends not on accumulated knowledge but only on newly added knowledge, which is small by construction. Based on the challenging CIFAR and SVHN datasets in combination with pre-trained feature extractors, we confirm AR's superior scaling behavior while showing better accuracy than common baselines in the field.
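As a rough illustration of the selective-replay idea described in the abstract (this is not the authors' implementation; the toy GMM parameters and the helper names `log_component_density` and `selective_replay` are invented for this sketch), one can fit an already-trained generative mixture model, identify which components the new task's data falls on, and generate replay samples only from those components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "previously learned" GMM: 3 diagonal Gaussian components in 2-D,
# standing in for a trained generator (values are illustrative only).
means = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
stds = np.full((3, 2), 0.5)

def log_component_density(x, k):
    """Log density of component k at points x (diagonal covariance)."""
    d = (x - means[k]) / stds[k]
    return -0.5 * np.sum(d * d + np.log(2 * np.pi * stds[k] ** 2), axis=1)

def selective_replay(new_batch, n_replay):
    """Generate replay samples only from components responsible for new data."""
    log_p = np.stack(
        [log_component_density(new_batch, k) for k in range(len(means))]
    )
    # Components that "win" at least one new sample are the ones at risk
    # of being overwritten, so only they need protecting via replay.
    affected = np.unique(np.argmax(log_p, axis=0))
    ks = rng.choice(affected, size=n_replay)
    samples = means[ks] + stds[ks] * rng.standard_normal((n_replay, 2))
    return samples, affected

# New task data lands near component 1 only, so the replay budget
# is spent on that component alone, regardless of total model size.
new_batch = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(20, 2))
replay, affected = selective_replay(new_batch, n_replay=50)
print(affected)  # → [1]
```

The point of the sketch is the scaling claim from the abstract: the replay volume is tied to the components touched by the new data, not to the total number of components accumulated over past tasks.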
Pages: 10