Self-supervised Contrastive Feature Refinement for Few-Shot Class-Incremental Learning

Cited by: 0
Authors
Ma, Shengjin [1 ]
Yuan, Wang [2 ]
Wang, Yiting [1 ]
Tan, Xin [1 ]
Zhang, Zhizhong [1 ]
Ma, Lizhuang [1 ,2 ]
Affiliations
[1] East China Normal Univ, Shanghai, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Source
COMPUTER-AIDED DESIGN AND COMPUTER GRAPHICS, CAD/GRAPHICS 2023 | 2024 / Vol. 14250
Funding
National Natural Science Foundation of China;
Keywords
Few-shot class-incremental learning; Virtual class augmentation; Self-supervised learning; Feature distribution recall;
DOI
10.1007/978-981-99-9666-7_19
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-Shot Class-Incremental Learning (FSCIL) aims to learn novel classes incrementally from only a few data points without forgetting old classes. Capturing the underlying patterns and traits of few-shot classes is very difficult. To meet these challenges, we propose a Self-supervised Contrastive Feature Refinement (SCFR) framework that tackles FSCIL from three aspects. First, we employ a self-supervised learning framework that enables the network to learn richer representations and promotes feature refinement. Meanwhile, we design virtual classes to improve the model's robustness and generalization during training. To prevent catastrophic forgetting, we add Gaussian noise to the prototypes of previously encountered classes to recall the distribution of known classes and maintain stability in the embedding space. SCFR offers a systematic solution that effectively mitigates catastrophic forgetting and over-fitting. Experiments on widely recognized datasets, including CUB200, miniImageNet, and CIFAR100, show that SCFR outperforms other mainstream methods.
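The abstract describes two mechanisms that can be sketched concretely: virtual classes synthesized from pairs of real classes, and old-class distribution recall by perturbing stored prototypes with Gaussian noise. The sketch below is a minimal, hypothetical illustration of those two ideas, not the paper's implementation; the function names, the mixing-based construction of virtual classes, and the noise scale `sigma` are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_virtual_feature(feat_a, feat_b, lam=0.5):
    """Synthesize a feature for a 'virtual class' by interpolating
    features of two real classes (one common instantiation; the
    paper's exact construction may differ)."""
    return lam * feat_a + (1.0 - lam) * feat_b

def recall_old_classes(prototypes, sigma=0.05, n_samples=3):
    """Resample pseudo-features for known classes by adding Gaussian
    noise to their stored class prototypes, approximating the old
    feature distribution without replaying raw data."""
    feats, labels = [], []
    for label, proto in prototypes.items():
        noise = rng.normal(0.0, sigma, size=(n_samples, proto.shape[0]))
        feats.append(proto + noise)          # n_samples perturbed copies
        labels.extend([label] * n_samples)   # all share the old label
    return np.vstack(feats), np.array(labels)

# Toy example: two old classes with 4-d prototype embeddings.
protos = {0: np.ones(4), 1: -np.ones(4)}
X, y = recall_old_classes(protos, sigma=0.05, n_samples=3)
virt = make_virtual_feature(protos[0], protos[1], lam=0.3)
print(X.shape, y.shape, virt.shape)  # (6, 4) (6,) (4,)
```

In an incremental session, pseudo-features like `X` could be mixed into each batch alongside the few-shot novel-class data, so the classifier keeps seeing the old-class regions of the embedding space.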
Pages: 281-294
Page count: 14