Example forgetting and rehearsal in continual learning

Cited by: 1
Authors
Benko, Beatrix [1 ,2 ]
Affiliations
[1] Eotvos Lorand Univ, Pazmany Peter Setany 1-C, H-1117 Budapest, Hungary
[2] Alfred Reny Inst Math, Realtanoda Utca 13-15, H-1053 Budapest, Hungary
Keywords
Continual learning; Catastrophic forgetting; Rehearsal exemplar selection;
DOI
10.1016/j.patrec.2024.01.021
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A major challenge of training neural networks on different tasks in a sequential manner is catastrophic forgetting, where earlier experiences are forgotten while learning a new one. In recent years, rehearsal-based methods have become popular top-performing alleviation approaches. Rehearsal builds upon maintaining, and repeatedly using for training, a small buffer of data selected across encountered tasks. In this work, we examine in image classification whether all training examples are forgotten equally and which ones are worth keeping in memory. Two different statistics of forgettability are employed to rank examples. We propose a simple strategy for example selection: keeping the least forgettable examples according to precomputed or continually updated forgetting statistics. Despite the simplicity of this method, it achieves better results compared to different memory-management strategies on standard benchmarks.
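The selection strategy the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a Toneva-style definition of a forgetting event (an example transitioning from correctly classified to misclassified between evaluations), and all names (`ForgettingTracker`, `least_forgettable`) are hypothetical.

```python
from collections import defaultdict

class ForgettingTracker:
    """Counts forgetting events per example: a transition from being
    correctly classified to misclassified across training checkpoints."""
    def __init__(self):
        self.prev_correct = {}            # last known prediction status per example
        self.forget_count = defaultdict(int)

    def update(self, example_id, correct):
        # A forgetting event: previously correct, now wrong.
        if self.prev_correct.get(example_id, False) and not correct:
            self.forget_count[example_id] += 1
        self.prev_correct[example_id] = correct

def least_forgettable(tracker, example_ids, buffer_size):
    """Fill the rehearsal buffer with the examples that were
    forgotten the fewest times (ties broken by original order)."""
    return sorted(example_ids, key=lambda i: tracker.forget_count[i])[:buffer_size]
```

In a continual-learning loop, `update` would be called for each training example after every epoch (or from precomputed statistics), and `least_forgettable` would select the buffer contents when a task ends.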
Pages: 65-72
Page count: 8
Related Papers
30 items in total
[1]  
Aljundi R, 2019, ADV NEUR IN, V32
[2]  
[Anonymous], 2009, LEARNING MULTIPLE LA
[3]  
Buzzega Pietro, 2020, NeurIPS, P15920
[4]  
Chaudhry A., 2019, arXiv, DOI 10.48550/arXiv.1902.10486
[5]  
Chrysakis A, 2020, PR MACH LEARN RES, V119
[6]   ConViT: improving vision transformers with soft convolutional inductive biases [J].
d'Ascoli, Stephane ;
Touvron, Hugo ;
Leavitt, Matthew L. ;
Morcos, Ari S. ;
Biroli, Giulio ;
Sagun, Levent .
JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT, 2022, 2022 (11)
[7]   DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion [J].
Douillard, Arthur ;
Rame, Alexandre ;
Couairon, Guillaume ;
Cord, Matthieu .
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, :9275-9285
[8]   Embracing Change: Continual Learning in Deep Neural Networks [J].
Hadsell, Raia ;
Rao, Dushyant ;
Rusu, Andrei A. ;
Pascanu, Razvan .
TRENDS IN COGNITIVE SCIENCES, 2020, 24 (12) :1028-1040
[9]   Deep Residual Learning for Image Recognition [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :770-778
[10]  
Hurtado J, 2023, arXiv, DOI 10.48550/arXiv.2207.01145