A robust and anti-forgettiable model for class-incremental learning

Cited by: 1
Authors
Chen, Jianting [1 ]
Xiang, Yang [1 ]
Affiliations
[1] Tongji Univ, Coll Elect & Informat Engn, 4800 Caoan Highway, Shanghai 201804, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Class incremental learning; Catastrophic forgetting; Batch normalization; Robust feature representation;
DOI
10.1007/s10489-022-04239-z
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In many real-world scenarios, neural network models are not fixed; they are expected to adapt to a dynamic environment and incrementally learn new knowledge. However, catastrophic forgetting is a challenge for incremental learning in neural networks, since updating the model parameters to incorporate new knowledge often degrades performance on previous tasks. In this paper, we focus on class-incremental learning (CIL) and attempt to mitigate catastrophic forgetting by improving the robustness of neural networks. Specifically, we modify two aspects of the models. First, we argue that plain batch normalization (BN) has a negative effect on CIL. Hence, we propose a BN variant called noisy batch normalization (NBN), which introduces Gaussian noise to resist the impact of changes in feature distributions and improve the robustness of feature representations. Second, to address the task-level overfitting problem in CIL, we introduce a decoder-based regularization (DBR) term, which employs a decoder following the feature encoder to reconstruct the input. DBR avoids overfitting to the current task and provides a distillation loss that retains the knowledge of previous tasks. We design two CIL scenarios and validate our approaches on the CIFAR-100, MiniImageNet, Fashion-MNIST, and Omniglot datasets. The results show that CIL algorithms based on our approach outperform the original algorithms, indicating that our approach can enhance model robustness and help the networks extract anti-forgettable feature representations.
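The abstract's two components can be sketched as follows. This is an illustrative NumPy reconstruction based only on the abstract, not the authors' implementation: the exact point where NBN injects Gaussian noise, the noise scale, and the use of a plain mean-squared reconstruction error for DBR are all assumptions.

```python
import numpy as np

def noisy_batch_norm(x, gamma, beta, noise_std=0.1, eps=1e-5, training=True):
    """Sketch of noisy batch normalization (NBN): standard batch
    normalization over the batch axis, with Gaussian noise added to the
    normalized activations during training (injection point assumed)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    if training:
        # Perturbing the normalized features simulates shifts in the
        # feature distribution, encouraging robust representations.
        x_hat = x_hat + np.random.normal(0.0, noise_std, size=x_hat.shape)
    return gamma * x_hat + beta

def dbr_loss(x, encoder, decoder):
    """Sketch of the decoder-based regularization (DBR) term: mean
    squared error of reconstructing the input through the decoder."""
    z = encoder(x)
    x_rec = decoder(z)
    return float(np.mean((x_rec - x) ** 2))
```

In inference mode (`training=False`) NBN above reduces to ordinary batch normalization; during training, the added noise regularizes the features the encoder must produce, while the DBR term pushes those features to remain informative enough to reconstruct the input.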
Pages: 14128-14145
Page count: 18
Related papers
50 records
  • [1] A robust and anti-forgettiable model for class-incremental learning
    Jianting Chen
    Yang Xiang
    Applied Intelligence, 2023, 53 : 14128 - 14145
  • [2] Class-Incremental Learning: A Survey
    Zhou, Da-Wei
    Wang, Qi-Wei
    Qi, Zhi-Hong
    Ye, Han-Jia
    Zhan, De-Chuan
    Liu, Ziwei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 9851 - 9873
  • [3] iNeMo: Incremental Neural Mesh Models for Robust Class-Incremental Learning
    Fischer, Tom
    Liu, Yaoyao
    Jesslen, Artur
    Ahmed, Noor
    Kaushik, Prakhar
    Wang, Angtian
    Yuille, Alan L.
    Kortylewski, Adam
    Ilg, Eddy
    COMPUTER VISION - ECCV 2024, PT LXXVII, 2024, 15135 : 357 - 374
  • [4] Model Behavior Preserving for Class-Incremental Learning
    Liu, Yu
    Hong, Xiaopeng
    Tao, Xiaoyu
    Dong, Songlin
    Shi, Jingang
    Gong, Yihong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (10) : 7529 - 7540
  • [5] Deep Learning for Class-Incremental Learning: A Survey
    Zhou, D.-W.
    Wang, F.-Y.
    Ye, H.-J.
    Zhan, D.-C.
    Jisuanji Xuebao/Chinese Journal of Computers, 2023, 46 (08): 1577 - 1605
  • [6] DYNAMIC REPLAY TRAINING FOR CLASS-INCREMENTAL LEARNING
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 5915 - 5919
  • [7] A Class-Incremental Learning Method for PCB Defect Detection
    Ge, Quanbo
    Wu, Ruilin
    Wu, Yupei
    Liu, Huaping
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74
  • [8] Is Class-Incremental Enough for Continual Learning?
    Cossu, Andrea
    Graffieti, Gabriele
    Pellegrini, Lorenzo
    Maltoni, Davide
    Bacciu, Davide
    Carta, Antonio
    Lomonaco, Vincenzo
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [9] BEFM: A balanced and efficient fine-tuning model in class-incremental learning
    Liu, Lize
    Ji, Jian
    Zhao, Lei
    KNOWLEDGE-BASED SYSTEMS, 2025, 315
  • [10] A survey on few-shot class-incremental learning
    Tian, Songsong
    Li, Lusi
    Li, Weijun
    Ran, Hang
    Ning, Xin
    Tiwari, Prayag
    NEURAL NETWORKS, 2024, 169 : 307 - 324