Class-Incremental Learning: A Survey

Times Cited: 33
Authors
Zhou, Da-Wei [1 ]
Wang, Qi-Wei [1 ]
Qi, Zhi-Hong [1 ]
Ye, Han-Jia [1 ]
Zhan, De-Chuan [1 ]
Liu, Ziwei [2 ]
Affiliations
[1] Nanjing Univ, Sch Artificial Intelligence, Natl Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[2] Nanyang Technol Univ, Coll Comp & Data Sci, S Lab, Singapore 639798, Singapore
Keywords
Catastrophic forgetting; class-incremental learning; continual learning; lifelong learning; neural networks
DOI
10.1109/TPAMI.2024.3429383
Chinese Library Classification
TP18 [theory of artificial intelligence]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results on many closed-world vision tasks. However, novel classes emerge from time to time in our ever-changing world, requiring a learning system to acquire new knowledge continually. Class-Incremental Learning (CIL) enables a learner to incorporate knowledge of new classes incrementally and to build a universal classifier over all seen classes. However, when the model is trained directly on new class instances, a fatal problem arises: it tends to catastrophically forget the characteristics of former classes, and its performance degrades drastically. The machine learning community has made numerous efforts to tackle catastrophic forgetting. In this paper, we comprehensively survey recent advances in class-incremental learning and summarize these methods from several aspects. We also provide a rigorous and unified evaluation of 17 methods on benchmark image classification tasks to empirically characterize different algorithms. Furthermore, we observe that the current comparison protocol ignores the influence of the memory budget in model storage, which may lead to unfair comparisons and biased results. Hence, we advocate fair comparison by aligning the memory budget in evaluation, together with several memory-agnostic performance measures.
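The class-incremental protocol the abstract describes can be illustrated with a toy sketch (not one of the surveyed methods): classes arrive in disjoint stages, and after each stage the learner is evaluated on all classes seen so far. The nearest-class-mean learner and the Gaussian-blob data below are hypothetical, chosen only to make the protocol runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class(c, n=50, dim=2):
    # Toy Gaussian blob for class c, centred on a class-specific mean.
    return rng.normal(loc=c * 3.0, scale=0.5, size=(n, dim))

class NearestClassMean:
    """Toy incremental learner: stores one prototype per seen class."""
    def __init__(self):
        self.prototypes = {}  # class id -> mean feature vector

    def learn_task(self, data):
        # `data` maps the new stage's class ids to their training samples.
        for c, x in data.items():
            self.prototypes[c] = x.mean(axis=0)

    def predict(self, x):
        classes = sorted(self.prototypes)
        means = np.stack([self.prototypes[c] for c in classes])
        dists = ((x[:, None, :] - means[None]) ** 2).sum(-1)
        return np.array(classes)[dists.argmin(axis=1)]

# CIL protocol: disjoint class sets arrive stage by stage; after each
# stage the model must classify among *all* classes seen so far.
model = NearestClassMean()
tasks = [
    {0: make_class(0), 1: make_class(1)},  # stage 1: classes {0, 1}
    {2: make_class(2), 3: make_class(3)},  # stage 2: classes {2, 3}
]

for t, task in enumerate(tasks, start=1):
    model.learn_task(task)
    seen = sorted(model.prototypes)
    test_x = np.concatenate([make_class(c) for c in seen])
    test_y = np.concatenate([np.full(50, c) for c in seen])
    acc = (model.predict(test_x) == test_y).mean()
    print(f"after stage {t}: classes seen = {seen}, accuracy = {acc:.2f}")
```

Note that this toy learner sidesteps forgetting by design, since it keeps a prototype for every class it has ever seen. A deep model fine-tuned only on each new stage's classes would instead overwrite the parameters serving the earlier ones; that degradation is the catastrophic forgetting the survey targets.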
Pages: 9851-9873
Number of Pages: 23