Class-Incremental Learning Method Based on Feature Space Augmented Replay and Bias Correction

Cited by: 0
Authors
Sun, Xiaopeng [1 ]
Yu, Lu [1 ]
Xu, Changsheng [2 ]
Affiliations
[1] School of Computer Science and Engineering, Tianjin University of Technology, Tianjin
[2] State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing
Source
Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence | 2024, Vol. 37, No. 8
Funding
National Natural Science Foundation of China
关键词
Catastrophic Forgetting; Class-Incremental Learning; Continual Learning; Feature Augmentation; Feature Representation;
DOI
10.16451/j.cnki.issn1003-6059.202408006
Abstract
Catastrophic forgetting arises when a network continually learns new knowledge. Various incremental learning methods have been proposed to address this problem; one mainstream approach balances the plasticity and stability of incremental learning by storing a small amount of old data and replaying it. However, storing data from old tasks can cause memory limitations and privacy breaches. To address this issue, a class-incremental learning method based on feature space augmented replay and bias correction is proposed to alleviate catastrophic forgetting. First, the mean intermediate-layer feature of each class is stored as its representative prototype, and the low-level feature extraction network is frozen to prevent prototype drift. In the incremental learning stage, the stored prototypes are augmented and replayed through geometric translation transformations to maintain the decision boundaries of previous tasks. Second, bias correction is introduced to learn classification weights for each task, further correcting the model's classification bias towards new tasks. Experiments on four benchmark datasets show that the proposed method outperforms state-of-the-art algorithms. © 2024 Science Press. All rights reserved.
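The prototype storage and augmented-replay step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the class name `PrototypeReplay`, the per-class radius heuristic, and the `radius_scale` parameter are assumptions introduced here for clarity. The idea shown is storing one mean-feature prototype per old class and replaying randomly translated copies of it in feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

class PrototypeReplay:
    """Sketch of feature-space augmented replay (illustrative, not the
    paper's exact method): keep one mean-feature prototype per old class
    and replay geometrically translated copies of it."""

    def __init__(self, radius_scale=1.0):
        self.prototypes = {}   # class id -> mean feature vector
        self.radii = {}        # class id -> average feature radius
        self.radius_scale = radius_scale

    def store(self, class_id, features):
        # features: (n_samples, feat_dim), taken from the frozen
        # low-level extractor so the prototype does not drift later
        mean = features.mean(axis=0)
        self.prototypes[class_id] = mean
        # average distance to the mean approximates the class spread
        self.radii[class_id] = np.linalg.norm(features - mean, axis=1).mean()

    def replay(self, n_per_class):
        """Generate augmented features for each stored prototype by a
        random translation: prototype + unit direction * scaled radius."""
        feats, labels = [], []
        for cid, proto in self.prototypes.items():
            noise = rng.standard_normal((n_per_class, proto.shape[0]))
            noise /= np.linalg.norm(noise, axis=1, keepdims=True)
            feats.append(proto + noise * self.radii[cid] * self.radius_scale)
            labels.extend([cid] * n_per_class)
        return np.vstack(feats), np.array(labels)
```

During each incremental stage, the replayed `(features, labels)` pairs would be mixed with new-task features to train the classifier head, so that old-class decision boundaries are preserved without storing any raw old-task images.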
Pages: 729-740 (11 pages)
References (35 total)
  • [1] HE K M, ZHANG X Y, REN S Q, et al., Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, Proc of the IEEE International Conference on Computer Vision, pp. 1026-1034, (2015)
  • [2] KIRKPATRICK J, PASCANU R, RABINOWITZ N, et al., Overcoming Catastrophic Forgetting in Neural Networks, Proceedings of the National Academy of Sciences, 114, 13, pp. 3521-3526, (2017)
  • [3] KEMKER R, MCCLURE M, ABITINO A, et al., Measuring Catastrophic Forgetting in Neural Networks, Proceedings of the AAAI Conference on Artificial Intelligence, 32, 1, pp. 3390-3398, (2018)
  • [4] LI G P, XU Y, DING J, et al., Toward Generic and Controllable Attacks Against Object Detection, IEEE Transactions on Geoscience and Remote Sensing, (2024)
  • [5] JEEVESWARAN K, BHAT P S, ZONOOZ B, et al., BiRT: Bio-inspired Replay in Vision Transformers for Continual Learning, Journal of Machine Learning Research, 202, pp. 14817-14835, (2023)
  • [6] CHEN X W, CHANG X B., Dynamic Residual Classifier for Class Incremental Learning, Proc of the IEEE/CVF International Conference on Computer Vision, pp. 18697-18706, (2023)
  • [7] ZHOU D W, WANG F Y, YE H J, et al., Deep Learning for Class-Incremental Learning: A Survey, Chinese Journal of Computers, 46, 8, pp. 1577-1605, (2023)
  • [8] MASANA M, LIU X L, TWARDOWSKI B, et al., Class-Incremental Learning: Survey and Performance Evaluation on Image Classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, 45, 5, pp. 5513-5533, (2023)
  • [9] ZHOU D W, WANG Q W, QI Z H, et al., Deep Class-Incremental Learning: A Survey
  • [10] GOU J P, YU B S, MAYBANK S J, et al., Knowledge Distillation: A Survey, International Journal of Computer Vision, 129, pp. 1789-1819, (2021)