Adapt and Refine: A Few-Shot Class-Incremental Learner via Pre-Trained Models

Cited by: 0
Authors
Qiang, Sunyuan [1 ]
Xiong, Zhu [1 ]
Liang, Yanyan [1 ]
Wan, Jun [1 ,2 ,3 ]
Zhang, Du [1 ]
Affiliations
[1] Macau Univ Sci & Technol, Macau, Peoples R China
[2] Chinese Acad Sci, Inst Automat, MAIS, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, SAI, Beijing, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT 1 | 2025, Vol. 15031
Keywords
Few-shot class-incremental learning (FSCIL); Catastrophic forgetting; Pre-trained models (PTMs)
DOI
10.1007/978-981-97-8487-5_30
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The intricate and ever-changing nature of the real world places greater demands on neural networks, requiring them to rapidly assimilate fleeting new concepts as they arise. Consequently, a novel learning paradigm has emerged: few-shot class-incremental learning (FSCIL), which aims to continually learn novel categories from insufficient instances while avoiding catastrophic forgetting of previous knowledge. However, recent FSCIL methods suffer significant performance limitations due to the low-quality latent representation spaces learned in the base session. To this end, this paper introduces a novel FSCIL method, Adapt and REfine (ARE). Specifically, ARE first strengthens the latent space by exploiting the powerful representational capabilities of pre-trained models (PTMs). Subsequently, the feature space and class prototypes are further adapted and refined to improve FSCIL performance. Extensive experiments on benchmarks such as CIFAR100, mini-ImageNet, and CUB200 validate the effectiveness of the proposed method.
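As background for the abstract above: a common FSCIL setup builds class prototypes from features of a frozen backbone and classifies queries by nearest prototype, so new few-shot classes can be added without retraining. The sketch below illustrates only this generic prototype-based scheme over pre-trained features; the class name, the mean-and-normalize prototype construction, and the cosine-similarity classifier are illustrative assumptions, not the paper's actual ARE method.

```python
import numpy as np

class PrototypeFSCIL:
    """Minimal prototype-based class-incremental classifier over
    features extracted by a (frozen) pre-trained backbone."""

    def __init__(self):
        self.prototypes = {}  # class id -> unit-norm prototype vector

    def adapt(self, feats, labels):
        """Add or update class prototypes from one session's features.
        Works for the base session and for later few-shot sessions alike."""
        for c in np.unique(labels):
            proto = feats[labels == c].mean(axis=0)
            self.prototypes[int(c)] = proto / np.linalg.norm(proto)

    def predict(self, feats):
        """Nearest-prototype classification by cosine similarity."""
        ids = sorted(self.prototypes)
        protos = np.stack([self.prototypes[c] for c in ids])  # (C, D)
        queries = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        return np.array(ids)[np.argmax(queries @ protos.T, axis=1)]
```

Because each incremental session only adds new prototype vectors, earlier classes are never overwritten, which is one reason prototype classifiers are a popular baseline against catastrophic forgetting in FSCIL.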
Pages: 431-444
Page count: 14