CeCR: Cross-entropy contrastive replay for online class-incremental continual learning

Cited by: 4
Authors
Sun, Guanglu [1 ]
Ji, Baolun [1 ]
Liang, Lili [1 ]
Chen, Minghui [1 ]
Affiliations
[1] Harbin Univ Sci & Technol, Sch Comp Sci & Technol, Harbin 150080, Peoples R China
Keywords
Online class incremental learning; Catastrophic forgetting; Cross-entropy contrastive loss; Contrastive replay; Buffer management; NEURAL-NETWORKS; EFFICIENT;
DOI
10.1016/j.neunet.2024.106163
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Replay-based methods have shown strong potential for learning continually from an online data stream. Their main challenge is selecting representative samples to store in the buffer and replay. In this paper, we propose the Cross-entropy Contrastive Replay (CeCR) method for the online class-incremental setting. First, we present a Class-focused Memory Retrieval method that performs class-level sampling without replacement. Second, we put forward a class-mean approximation memory update method that selectively replaces misclassified training samples with samples from the current input batch. In addition, a Cross-entropy Contrastive Loss is proposed so that the model acquires more solid knowledge during training and learns more effectively. Experiments show that CeCR achieves comparable or improved performance on two benchmark datasets in comparison with state-of-the-art methods.
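The abstract names the method's components without giving formulas, so the sketch below shows one plausible reading of two of them: class-level sampling without replacement from a replay buffer, and a loss that couples cross-entropy with a supervised contrastive term. The names ReplayBuffer, class_focused_retrieve, and cross_entropy_contrastive_loss, the lambda_con and temperature hyperparameters, and the SupCon-style contrastive term are illustrative assumptions, not the authors' implementation; the class-mean approximation buffer update is omitted because the abstract does not specify its replacement criterion.

```python
# Illustrative sketch only (not the authors' code). All names below are assumptions.
import random
from collections import defaultdict

import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Hypothetical buffer that groups stored samples by class label."""

    def __init__(self):
        self.per_class = defaultdict(list)  # label -> list of sample tensors

    def add(self, x, y):
        self.per_class[int(y)].append(x)

    def class_focused_retrieve(self, n_per_class):
        """Class-level sampling without replacement, as the abstract describes."""
        xs, ys = [], []
        for label, samples in self.per_class.items():
            k = min(n_per_class, len(samples))
            for x in random.sample(samples, k):  # without replacement within a class
                xs.append(x)
                ys.append(label)
        if not xs:
            return None, None
        return torch.stack(xs), torch.tensor(ys)


def cross_entropy_contrastive_loss(logits, features, labels,
                                   lambda_con=1.0, temperature=0.1):
    """Cross-entropy plus a supervised contrastive term (one plausible reading
    of the paper's Cross-entropy Contrastive Loss; the exact form may differ)."""
    ce = F.cross_entropy(logits, labels)

    z = F.normalize(features, dim=1)                     # unit-norm embeddings
    sim = z @ z.t() / temperature                        # pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all non-self pairs for each anchor
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    pos_count = pos_mask.sum(1).clamp(min=1)             # avoid division by zero
    con = -(log_prob * pos_mask).sum(1) / pos_count      # mean over positive pairs
    return ce + lambda_con * con.mean()
```

In an online training step, one would interleave each incoming batch with a buffer batch drawn by class_focused_retrieve and optimize the combined loss on both.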
Pages: 11
Related Papers
50 records in total
  • [1] Anchor Assisted Experience Replay for Online Class-Incremental Learning
    Lin, Huiwei
    Feng, Shanshan
    Li, Xutao
    Li, Wentao
    Ye, Yunming
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (05) : 2217 - 2232
  • [2] Is Class-Incremental Enough for Continual Learning?
    Cossu, Andrea
    Graffieti, Gabriele
    Pellegrini, Lorenzo
    Maltoni, Davide
    Bacciu, Davide
    Carta, Antonio
    Lomonaco, Vincenzo
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [3] DYNAMIC REPLAY TRAINING FOR CLASS-INCREMENTAL LEARNING
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 5915 - 5919
  • [4] Squeezing More Past Knowledge for Online Class-Incremental Continual Learning
    Yu, Da
    Zhang, Mingyi
    Li, Mantian
    Zha, Fusheng
    Zhang, Junge
    Sun, Lining
    Huang, Kaiqi
    IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2023, 10 (03) : 722 - 736
  • [5] Contrastive Correlation Preserving Replay for Online Continual Learning
    Yu, Da
    Zhang, Mingyi
    Li, Mantian
    Zha, Fusheng
    Zhang, Junge
    Sun, Lining
    Huang, Kaiqi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (01) : 124 - 139
  • [6] General Federated Class-Incremental Learning With Lightweight Generative Replay
    Chen, Yuanlu
    Tan, Alysa Ziying
    Feng, Siwei
    Yu, Han
    Deng, Tao
    Zhao, Libang
    Wu, Feng
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (20) : 33927 - 33939
  • [7] Class-Incremental Continual Learning Into the eXtended DER-Verse
    Boschini, Matteo
    Bonicelli, Lorenzo
    Buzzega, Pietro
    Porrello, Angelo
    Calderara, Simone
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (05) : 5497 - 5512
  • [8] Continual prune-and-select: class-incremental learning with specialized subnetworks
    Dekhovich, Aleksandr
    Tax, David M. J.
    Sluiter, Marcel H. F.
    Bessa, Miguel A.
    APPLIED INTELLIGENCE, 2023, 53 : 17849 - 17864
  • [9] Class-Incremental Learning: A Survey
    Zhou, Da-Wei
    Wang, Qi-Wei
    Qi, Zhi-Hong
    Ye, Han-Jia
    Zhan, De-Chuan
    Liu, Ziwei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (12) : 9851 - 9873