Contrastive Correlation Preserving Replay for Online Continual Learning

Cited by: 8
|
Authors
Yu, Da [1 ]
Zhang, Mingyi [2 ,3 ]
Li, Mantian [1 ]
Zha, Fusheng [1 ]
Zhang, Junge [2 ,3 ]
Sun, Lining [1 ]
Huang, Kaiqi [2 ,3 ,4 ]
Affiliations
[1] Harbin Inst Technol HIT, State Key Lab Robot & Syst, Harbin 150080, Peoples R China
[2] Chinese Acad Sci CASIA, Inst Automat, Ctr Res Intelligent Syst & Engn, Beijing 100190, Peoples R China
[3] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[4] CAS Ctr Excellence Brain Sci & Intelligence Techno, Shanghai 200031, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Correlation; Knowledge transfer; Training; Memory management; Data models; Mutual information; Continual learning; catastrophic forgetting; class-incremental learning; experience replay; KNOWLEDGE;
DOI
10.1109/TCSVT.2023.3285221
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Online Continual Learning (OCL), a core step toward achieving human-level intelligence, aims to incrementally learn and accumulate novel concepts from streaming data that can be seen only once, while alleviating catastrophic forgetting of previously acquired knowledge. Under this setting, the model must learn new classes or tasks in an online manner, and the data distribution may change over time; moreover, task boundaries and identities are unavailable during both training and evaluation. To balance the stability and plasticity of networks, we propose a replay-based framework for OCL, named Contrastive Correlation Preserving Replay (CCPR), which focuses not only on individual instances but also on correlations between multiple instances. Specifically, besides the previous raw samples, the corresponding representations are stored in memory and used to construct correlations for the past and the current model. To better capture correlations and higher-order dependencies, we maximize a lower bound on the mutual information between the past correlation and the current correlation by leveraging contrastive objectives. Furthermore, to improve performance, we propose a new memory update strategy that simultaneously encourages balance and diversity of the samples in memory: with limited memory slots, it retains less redundant and more representative samples for later replay. Extensive evaluations on several popular CL datasets show that our method consistently outperforms state-of-the-art methods and effectively consolidates knowledge to alleviate forgetting.
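The core idea in the abstract, maximizing a contrastive (InfoNCE-style) lower bound on the mutual information between the past and current correlation structures, can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: function names, shapes, and the temperature value are hypothetical, and "correlation" is approximated here by a pairwise cosine-similarity matrix over a replayed batch.

```python
import torch
import torch.nn.functional as F

def correlation_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity ('correlation') matrix of a feature batch."""
    z = F.normalize(feats, dim=1)
    return z @ z.t()  # (B, B)

def contrastive_correlation_loss(past_feats, curr_feats, temperature=0.1):
    """InfoNCE loss between past and current correlation structures.

    Row i of the past correlation matrix is the positive for row i of the
    current one; all other rows serve as negatives. Minimizing this
    cross-entropy maximizes a lower bound on the mutual information
    between the two correlation views.
    """
    c_past = correlation_matrix(past_feats)  # (B, B), from stored representations
    c_curr = correlation_matrix(curr_feats)  # (B, B), from the current model
    # Similarity between correlation rows across the two models.
    logits = F.normalize(c_curr, dim=1) @ F.normalize(c_past, dim=1).t()
    logits = logits / temperature
    targets = torch.arange(logits.size(0))   # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: stored representations from the past model vs. the current
# model's slightly drifted features for the same replayed samples.
past = torch.randn(8, 32)
curr = past + 0.05 * torch.randn(8, 32)
loss = contrastive_correlation_loss(past, curr)
```

In a real replay loop this loss would be added to the usual classification objective on the mixed batch of new and replayed samples, so the current model is penalized when the relational structure among replayed samples drifts away from the stored past structure.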
Pages: 124-139
Page count: 16
Related Papers
50 records in total
  • [1] Hierarchical Correlations Replay for Continual Learning
    Wang, Qiang
    Liu, Jiayi
    Ji, Zhong
    Pang, Yanwei
    Zhang, Zhongfei
    KNOWLEDGE-BASED SYSTEMS, 2022, 250
  • [2] CeCR: Cross-entropy contrastive replay for online class-incremental continual learning
    Sun, Guanglu
    Ji, Baolun
    Liang, Lili
    Chen, Minghui
    NEURAL NETWORKS, 2024, 173
  • [3] Continual Pedestrian Trajectory Learning With Social Generative Replay
    Wu, Ya
    Bighashdel, Ariyan
    Chen, Guang
    Dubbelman, Gijs
    Jancura, Pavol
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (02) : 848 - 855
  • [4] Memory Enhanced Replay for Continual Learning
    Xu, Guixun
    Guo, Wenhui
    Wang, Yanjiang
    2022 16TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP2022), VOL 1, 2022, : 218 - 222
  • [5] CONTRASTIVE LEARNING FOR ONLINE SEMI-SUPERVISED GENERAL CONTINUAL LEARNING
    Michel, Nicolas
    Negrel, Romain
    Chierchia, Giovanni
    Bercher, Jean-Francois
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1896 - 1900
  • [6] Chameleon: Dual Memory Replay for Online Continual Learning on Edge Devices
    Aggarwal, Shivam
    Binici, Kuluhan
    Mitra, Tulika
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43 (06) : 1663 - 1676
  • [7] Replay-Based Continual Learning for Test Case Prioritization
    Fariha, Asma
    Azim, Akramul
    Liscano, Ramiro
    2024 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS, ICSTW 2024, 2024, : 106 - 107
  • [8] CCL: Continual Contrastive Learning for LiDAR Place Recognition
    Cui, Jiafeng
    Chen, Xieyuanli
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (08) : 4433 - 4440
  • [9] Relational Experience Replay: Continual Learning by Adaptively Tuning Task-Wise Relationship
    Wang, Quanziang
    Wang, Renzhen
    Li, Yuexiang
    Wei, Dong
    Wang, Hong
    Ma, Kai
    Zheng, Yefeng
    Meng, Deyu
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 9683 - 9698
  • [10] Marginal Replay vs Conditional Replay for Continual Learning
    Lesort, Timothee
    Gepperth, Alexander
    Stoian, Andrei
    Filliat, David
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: DEEP LEARNING, PT II, 2019, 11728 : 466 - 480