Online continual learning in image classification: An empirical survey

Cited by: 271
Authors
Mai, Zheda [1 ]
Li, Ruiwen [1 ]
Jeong, Jihwan [1 ]
Quispe, David [1 ]
Kim, Hyunwoo [2 ]
Sanner, Scott [1 ]
Affiliations
[1] Univ Toronto, Dept Mech & Ind Engn, 5 Kings Coll Rd, Toronto, ON M5S 3G8, Canada
[2] LG AI Res, 128 Yeoui Daero, Seoul, South Korea
Keywords
Incremental learning; Continual learning; Lifelong learning; Catastrophic forgetting; Online learning;
DOI
10.1016/j.neucom.2021.10.021
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Online continual learning for image classification studies the problem of learning to classify images from an online stream of data and tasks, where tasks may include new classes (class incremental) or data non-stationarity (domain incremental). One of the key challenges of continual learning is to avoid catastrophic forgetting (CF), i.e., forgetting old tasks in the presence of more recent ones. Over the past few years, a wide range of methods and tricks have been introduced to address the continual learning problem, but many have not been fairly and systematically compared under a variety of realistic and practical settings. To better understand the relative advantages of various approaches and the settings where they work best, this survey aims to (1) compare state-of-the-art methods such as Maximally Interfered Retrieval (MIR), iCaRL, and GDumb (a very strong baseline), determine which works best under different memory and data settings, and better understand the key sources of CF; (2) determine whether the best online class incremental methods are also competitive in the domain incremental setting; and (3) evaluate the performance of seven simple but effective tricks, such as the "review" trick and the nearest class mean (NCM) classifier, to assess their relative impact. Regarding (1), we observe that iCaRL remains competitive when the memory buffer is small; GDumb outperforms many recently proposed methods on medium-sized datasets, and MIR performs best on larger-scale datasets. For (2), we note that GDumb performs quite poorly while MIR, already competitive for (1), is also strongly competitive in this very different (but important) continual learning setting. Overall, this allows us to conclude that MIR is a strong and versatile online continual learning method across a wide variety of settings. Finally, for (3), we find that all tricks are beneficial, and when augmented with the "review" trick and the NCM classifier, MIR produces performance levels that bring online continual learning much closer to its ultimate goal of matching offline training. Our code is available at https://github.com/RaptorMai/online-continual-learning. (c) 2021 Elsevier B.V. All rights reserved.
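The nearest class mean (NCM) classifier mentioned in the abstract replaces the trained softmax head with a nearest-prototype rule: each class is represented by the mean of its buffered exemplar embeddings, and a query is assigned to the closest mean in feature space. The sketch below is a minimal illustration under assumed inputs (pre-computed embeddings and an exemplar buffer); it is not the authors' released implementation, and the function name and arguments are hypothetical.

```python
# Minimal NCM classifier sketch (assumed inputs, for illustration only).
import torch
import torch.nn.functional as F

def ncm_classify(features, exemplar_features, exemplar_labels):
    """Assign each row of `features` to the class whose exemplar mean is closest.

    features          : (N, D) embeddings of the query images
    exemplar_features : (M, D) embeddings of buffered exemplars
    exemplar_labels   : (M,)   integer class labels of the exemplars
    """
    classes = exemplar_labels.unique()
    # Per-class prototype: mean of L2-normalized exemplar embeddings.
    means = torch.stack([
        F.normalize(exemplar_features[exemplar_labels == c], dim=1).mean(dim=0)
        for c in classes
    ])                                    # (C, D)
    means = F.normalize(means, dim=1)
    queries = F.normalize(features, dim=1)
    # Nearest prototype by Euclidean distance (equivalent to highest cosine
    # similarity after normalization).
    dists = torch.cdist(queries, means)   # (N, C)
    return classes[dists.argmin(dim=1)]
```

Because the prototypes are recomputed from the memory buffer rather than learned by gradient descent, this classification rule is less sensitive to the recency bias of the softmax head, which is why it is evaluated as one of the seven tricks in the survey.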
Pages: 28-51
Number of pages: 24