Class Incremental Learning: A Review and Performance Evaluation

Citations: 0
Authors
Zhu F. [1 ,2 ]
Zhang X.-Y. [1 ,2 ]
Liu C.-L. [1 ,2 ]
Affiliations
[1] National Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing
[2] School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing
Source
Zidonghua Xuebao/Acta Automatica Sinica | 2023 / Vol. 49 / No. 03
Funding
National Natural Science Foundation of China
Keywords
catastrophic forgetting; continual learning; deep learning; incremental learning; machine learning
DOI
10.16383/j.aas.c220588
Abstract
Machine learning has been successfully applied in many fields such as computer vision, natural language processing, and speech recognition. However, in current machine learning systems, models are typically fixed after training. Consequently, they can only generalize to classes that appear in the training set and cannot continuously learn newly emerging classes. In real-world applications, new classes or tasks appear continuously, which requires the model to learn new knowledge without forgetting the knowledge of previously seen classes. The emerging research direction of class incremental learning (CIL) aims to enable models to continuously learn new classes while preserving their discrimination ability on old classes (defying "catastrophic forgetting") in open and dynamic environments. This paper provides a comprehensive overview of class incremental learning methods developed in recent years. Specifically, existing methods are grouped into five categories: parameter regularization based, knowledge distillation based, data replay based, feature replay based, and network structure based methods. The advantages and disadvantages of each category are summarized. In addition, extensive experiments are conducted to evaluate and compare representative methods on benchmark datasets. Finally, this paper discusses future research directions of class incremental learning. © 2023 Science Press. All rights reserved.
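To make the knowledge-distillation category concrete, the sketch below shows a loss commonly used in distillation-based CIL (in the spirit of LwF/iCaRL-style methods): a cross-entropy term over all classes seen so far, plus a distillation term that keeps the updated model's old-class outputs close to those of a frozen copy of the previous model. This is an illustrative minimal sketch, not the specific formulation evaluated in the paper; the function name, the temperature `T`, and the weight `alpha` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cil_distillation_loss(new_logits, old_logits, targets, n_old, T=2.0, alpha=0.5):
    """One incremental step's loss: classification on all current classes
    plus distillation on the first n_old (previously learned) classes.

    new_logits: outputs of the model being trained, shape (B, n_old + n_new)
    old_logits: outputs of the frozen previous model, shape (B, n_old)
    """
    # Standard classification loss over all classes seen so far.
    ce = F.cross_entropy(new_logits, targets)
    # Distillation: match temperature-softened old-class probabilities
    # of the frozen previous model (KL divergence, scaled by T^2 as usual).
    log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    p_old = F.softmax(old_logits[:, :n_old] / T, dim=1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
    return ce + alpha * kd
```

In practice the distillation term slows forgetting by anchoring old-class responses, while the cross-entropy term drives learning of the new classes; the trade-off between the two is controlled by `alpha`.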
Pages: 635-660
Page count: 25