Continual Learning, Fast and Slow

Cited by: 6
Authors
Pham, Quang [1 ]
Liu, Chenghao [2 ]
Hoi, Steven C. H. [2 ,3 ]
Affiliations
[1] ASTAR, Inst Infocomm Res I2R, Singapore 138632, Singapore
[2] Salesforce Res Asia, Singapore 038985, Singapore
[3] Singapore Management Univ, Singapore 188065, Singapore
Keywords
Continual learning; fast and slow learning; SYSTEMS;
DOI
10.1109/TPAMI.2023.3324203
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
According to the Complementary Learning Systems (CLS) theory (McClelland et al. 1995) in neuroscience, humans achieve effective continual learning through two complementary systems: a fast learning system centered on the hippocampus for rapid learning of the specifics of individual experiences, and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose DualNets (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks and a slow learning system for learning task-agnostic general representations via Self-Supervised Learning (SSL). DualNets can seamlessly incorporate both representation types into a holistic framework to facilitate better continual learning in deep neural networks. Via extensive experiments, we demonstrate the promising results of DualNets on a wide range of continual learning protocols, ranging from the standard offline, task-aware setting to the challenging online, task-free scenario. Notably, on the CTrL (Veniat et al. 2020) benchmark, which has unrelated tasks with vastly different visual images, DualNets achieves performance competitive with existing state-of-the-art dynamic architecture strategies (Ostapenko et al. 2021). Furthermore, we conduct comprehensive ablation studies to validate DualNets' efficacy, robustness, and scalability.
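The fast/slow split described above can be illustrated with a deliberately minimal toy sketch. This is not the paper's implementation (DualNets trains deep networks, with the slow learner optimized by an SSL objective); it only mimics the two-speed idea with scalar parameters and hypothetical learning rates, where the fast head adapts quickly on top of a slowly drifting shared representation.

```python
# Toy sketch of two-speed learning, NOT the DualNets implementation.
# All names and update rules here are illustrative assumptions.

SLOW_LR = 0.01   # slow learner: gradual, task-agnostic updates
FAST_LR = 0.5    # fast learner: rapid, task-specific updates

class DualLearner:
    """Two-system learner: a shared slow weight and a fast head weight."""

    def __init__(self):
        self.slow_w = 1.0   # stands in for the general representation
        self.fast_w = 0.1   # stands in for the task-specific head

    def predict(self, x):
        # The fast head consumes the slow system's representation of x.
        return self.fast_w * (self.slow_w * x)

    def update(self, x, target):
        # Squared-error gradient steps taken at two different rates.
        err = self.predict(x) - target
        grad_fast = err * (self.slow_w * x)
        grad_slow = err * (self.fast_w * x)
        self.fast_w -= FAST_LR * grad_fast
        self.slow_w -= SLOW_LR * grad_slow
        return err ** 2

model = DualLearner()
losses = [model.update(1.0, 1.0) for _ in range(50)]
```

After training, the loss shrinks and the fast weight has moved far more than the slow weight, reflecting rapid task-specific adaptation over a nearly stable shared representation.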
Pages: 134-149
Page count: 16
Cited References
91 records
[11] Carlucci, Fabio M.; D'Innocente, Antonio; Bucci, Silvia; Caputo, Barbara; Tommasi, Tatiana. Domain Generalization by Solving Jigsaw Puzzles. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 2224-2233.
[12] Cha, Hyuntak. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 9516.
[13] Chaudhry, A. arXiv, 2019.
[14] Chaudhry, Arslan; Dokania, Puneet K.; Ajanthan, Thalaiyasingam; Torr, Philip H. S. Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence. Computer Vision - ECCV 2018, Pt. XI, 2018, 11215: 556-572.
[15] Chaudhry, Arslan. 7th International Conference on Learning Representations, 2019.
[16] Chen, T. Proceedings of Machine Learning Research, Vol. 119, 2020.
[17] Chen, Xinlei; He, Kaiming. Exploring Simple Siamese Representation Learning. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 15745-15753.
[18] Cimpoi, Mircea; Maji, Subhransu; Kokkinos, Iasonas; Mohamed, Sammy; Vedaldi, Andrea. Describing Textures in the Wild. 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014: 3606-3613.
[19] d'Autume, C. de Masson. Advances in Neural Information Processing Systems, 2019.
[20] De Lange, Matthias; Aljundi, Rahaf; Masana, Marc; Parisot, Sarah; Jia, Xu; Leonardis, Ales; Slabaugh, Greg; Tuytelaars, Tinne. A Continual Learning Survey: Defying Forgetting in Classification Tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(07): 3366-3385.