Online Continual Learning Under Extreme Memory Constraints

Cited by: 43
Authors
Fini, Enrico [1,2]
Lathuiliere, Stephane [3]
Sangineto, Enver [1]
Nabi, Moin [2]
Ricci, Elisa [1,4]
Affiliations
[1] Univ Trento, Trento, Italy
[2] SAP AI Res, Berlin, Germany
[3] Inst Polytech Paris, Telecom Paris, LTCI, Palaiseau, France
[4] Fdn Bruno Kessler, Trento, Italy
Source
COMPUTER VISION - ECCV 2020, PT XXVIII | 2020 / Vol. 12373
Funding
EU Horizon 2020;
Keywords
Continual Learning; Online learning; Memory efficient;
D O I
10.1007/978-3-030-58604-1_43
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Continual Learning (CL) aims to develop agents emulating the human ability to sequentially learn new tasks while being able to retain knowledge obtained from past experiences. In this paper, we introduce the novel problem of Memory-Constrained Online Continual Learning (MC-OCL) which imposes strict constraints on the memory overhead that a possible algorithm can use to avoid catastrophic forgetting. As most, if not all, previous CL methods violate these constraints, we propose an algorithmic solution to MC-OCL: Batch-level Distillation (BLD), a regularization-based CL approach, which effectively balances stability and plasticity in order to learn from data streams, while preserving the ability to solve old tasks through distillation. Our extensive experimental evaluation, conducted on three publicly available benchmarks, empirically demonstrates that our approach successfully addresses the MC-OCL problem and achieves comparable accuracy to prior distillation methods requiring higher memory overhead (Code available at https://github.com/DonkeyShot21/batch-level-distillation).
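The abstract describes Batch-level Distillation only at a high level: a regularization term keeps the new model's outputs close to the old model's, trading stability against plasticity. As an illustrative sketch only (not the paper's actual BLD algorithm), the generic distillation regularizer that this family of methods builds on can be written as follows; the names `lam` and the temperature `T` are assumed hyperparameters, not taken from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax with a max-shift for numerical stability."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the old (teacher) model's softened outputs
    and the new (student) model's: the standard distillation term."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

def total_loss(task_loss, student_logits, teacher_logits, lam=1.0, T=2.0):
    """Plasticity (new-task loss) plus lam-weighted stability (distillation)."""
    return task_loss + lam * distillation_loss(student_logits, teacher_logits, T)
```

The distillation term is minimized when the student reproduces the teacher's softened distribution, which is how the old tasks' behavior is preserved without storing their data; a memory-constrained variant like BLD additionally restricts how much teacher information may be kept around.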
Pages: 720-735
Page count: 16