Incremental Model Enhancement via Memory-based Contrastive Learning

Cited by: 1
Authors
Xuan, Shiyu [1]
Yang, Ming [2]
Zhang, Shiliang [1,3]
Affiliations
[1] Peking Univ, Sch Comp Sci, State Key Lab Multimedia Informat Proc, Beijing 100871, Peoples R China
[2] Ant Grp, Multimodal Cognit, Seattle, WA USA
[3] Peng Cheng Lab, Shenzhen 518055, Peoples R China
Keywords
Incremental learning; Contrastive loss; Memory bank; Knowledge distillation
DOI
10.1007/s11263-024-02138-z
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Training data for many vision tasks may arrive sequentially in practice, e.g., in autonomous driving or video surveillance applications. This raises a fundamental challenge: how to keep improving the performance on a specific task by learning from sequentially available training splits. This paper investigates this problem as Incremental Model Enhancement (IME). IME is distinct from conventional Incremental Learning (IL), where each training split typically corresponds to a set of independent classes, domains, or tasks. In IME, each training split may cover only part of the entire data distribution of the target vision task. Consequently, the IME model should be optimized towards the joint distribution of all available training splits, instead of towards each newly arrived split as in IL methods. To address these issues, our method stores feature vectors of previously observed training data in a memory bank, which preserves compressed knowledge of the previous training data. We then train on the memorized features together with each newly arrived training split via Memory-based Contrastive Learning (MCL). A new Contrastive Relation Preserving (CRP) scheme updates the memory bank to prevent the preserved features from becoming obsolete, and works alongside MCL to boost model performance. Experiments on several large-scale image classification benchmarks demonstrate the effectiveness of our method. Our method also works well on semantic segmentation, showing strong generalization across diverse vision tasks.
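The abstract outlines the core mechanism: a memory bank of feature vectors from earlier splits joins each newly arrived split in a contrastive objective (MCL), while the CRP scheme keeps the stored features from going stale. The record does not give the paper's exact formulations, so the following PyTorch sketch only illustrates the general idea under stated assumptions: a supervised InfoNCE-style loss computed against the bank, where MemoryBank, mcl_loss, the ring-buffer update, and the temperature value are hypothetical names and choices for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F


class MemoryBank:
    """Fixed-size ring buffer of L2-normalized features and class labels.

    Hypothetical structure; the paper's actual bank and its CRP update
    rule are not described in this record.
    """

    def __init__(self, size: int, dim: int):
        self.features = torch.zeros(size, dim)
        self.labels = torch.full((size,), -1, dtype=torch.long)  # -1 = empty slot
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, feats: torch.Tensor, labels: torch.Tensor):
        # Overwrite the oldest slots in ring-buffer fashion.
        n = feats.size(0)
        idx = torch.arange(self.ptr, self.ptr + n) % self.features.size(0)
        self.features[idx] = F.normalize(feats, dim=1)
        self.labels[idx] = labels
        self.ptr = (self.ptr + n) % self.features.size(0)


def mcl_loss(feats: torch.Tensor, labels: torch.Tensor,
             bank: MemoryBank, temperature: float = 0.1) -> torch.Tensor:
    """Supervised InfoNCE against the bank: memorized features sharing the
    query's class label act as positives, other occupied slots as negatives."""
    feats = F.normalize(feats, dim=1)
    logits = feats @ bank.features.t() / temperature   # (B, M) similarities
    occupied = (bank.labels >= 0).unsqueeze(0)         # (1, M) filled slots
    pos_mask = labels.unsqueeze(1).eq(bank.labels.unsqueeze(0)) & occupied
    # Log-softmax over occupied slots only.
    log_prob = logits - torch.logsumexp(
        logits.masked_fill(~occupied, float('-inf')), dim=1, keepdim=True)
    # Mean log-likelihood of the positives for each query feature.
    n_pos = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask).sum(1).div(n_pos).mean()


# Toy usage: warm the bank with one batch, then score the next.
bank = MemoryBank(size=4096, dim=128)
bank.enqueue(torch.randn(32, 128), torch.randint(0, 10, (32,)))
loss = mcl_loss(torch.randn(32, 128), torch.randint(0, 10, (32,)), bank)

In this sketch, features of the current split are pulled toward same-class entries in the bank and pushed away from the rest, so the loss is shaped by the joint distribution of old and new data rather than the new split alone; the CRP update of the bank itself is omitted, since its form cannot be inferred from the abstract.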
Pages: 65–83
Number of pages: 19