Hyper-feature aggregation and relaxed distillation for class incremental learning

Cited by: 3
Authors
Wu, Ran [1 ]
Liu, Huanyu [1 ]
Yue, Zongcheng [2 ]
Li, Jun-Bao [1 ]
Sham, Chiu-Wing [2 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci, Yikuang St, Harbin 150001, Heilongjiang Pr, Peoples R China
[2] Univ Auckland, Sch Comp Sci, Princes St, Auckland 1062, New Zealand
Funding
National Natural Science Foundation of China;
Keywords
Class incremental learning; Relaxed knowledge distillation; Hyper-feature aggregation;
DOI
10.1016/j.patcog.2024.110440
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Although neural networks have been used extensively in pattern recognition scenarios, the pre-acquisition of datasets remains challenging. In most pattern recognition areas, it is difficult to prepare a training dataset that covers all data domains. Incremental learning was proposed to update neural networks in an online manner, but the catastrophic forgetting issue still needs to be studied. Class-incremental learning is one of the most challenging incremental learning contexts; it trains a unified model to classify all incrementally arriving classes learned thus far equally. Prior studies on class-incremental learning favor model stability over plasticity in order to preserve old knowledge and prevent catastrophic forgetting. Consequently, the model's plasticity is neglected, making generalization to new data difficult. We propose a novel distillation-based method named Hyper-feature Aggregation and Relaxed Distillation (HARD) to realize a balanced optimization of old and new knowledge. Feature aggregation is proposed to capture global semantics while maintaining the diversity of the feature distribution after promoting exemplar representations to higher dimensions. The proposed algorithm also introduces a relaxed restriction that conditions the hyper-feature space through a normalized comparison of relation matrices. After generalizing to more classes, the model is encouraged to rebuild the feature distribution when encountering new classes and to fine-tune the feature space so as to realize more distinct inter-class boundaries. Extensive experiments were conducted on two benchmark datasets, and consistent improvements under diverse experimental settings demonstrate the effectiveness of the proposed approach.
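The two components described in the abstract can be pictured with a short sketch. This is not the authors' implementation: the names (`HyperFeatureAggregator`, `relation_matrix`, `relaxed_distillation_loss`), the linear projection to a higher dimension, the cosine-similarity relation matrices, and the KL-divergence comparison of row-normalized relations are all illustrative assumptions about how hyper-feature aggregation and a relaxed, relation-based distillation constraint might look in PyTorch.

```python
import torch
import torch.nn.functional as F


class HyperFeatureAggregator(torch.nn.Module):
    """Hypothetical module: lifts backbone features to a higher-dimensional
    (hyper-feature) space before relations between samples are compared."""

    def __init__(self, in_dim: int = 512, hyper_dim: int = 2048):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, hyper_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, in_dim) backbone features for a batch of exemplars
        return F.relu(self.proj(feats))  # (B, hyper_dim) hyper-features


def relation_matrix(hyper_feats: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity (relation) matrix over a batch."""
    f = F.normalize(hyper_feats, dim=1)
    return f @ f.t()  # (B, B)


def relaxed_distillation_loss(hyper_new: torch.Tensor,
                              hyper_old: torch.Tensor) -> torch.Tensor:
    """Relaxed restriction: rather than forcing new features to match old ones
    element-wise, only the normalized relation matrices are compared, so the
    new model may reshape the feature distribution for new classes as long as
    the relational structure of old knowledge is approximately preserved."""
    log_r_new = F.log_softmax(relation_matrix(hyper_new), dim=1)
    r_old = F.softmax(relation_matrix(hyper_old), dim=1)
    return F.kl_div(log_r_new, r_old, reduction="batchmean")


# Usage sketch: distill from a frozen old-task model to the current model.
if __name__ == "__main__":
    agg = HyperFeatureAggregator(in_dim=512, hyper_dim=2048)
    feats_new = torch.randn(32, 512)   # current model's exemplar features
    feats_old = torch.randn(32, 512)   # frozen old model's exemplar features
    loss = relaxed_distillation_loss(agg(feats_new), agg(feats_old).detach())
    print(loss.item())
```

Row-normalizing the relation matrices before the KL comparison is what makes the constraint "relaxed" in this sketch: absolute feature positions may drift as new classes arrive, and only the relative similarities among exemplars are anchored to the old model.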
Pages: 10