Enhancing few-shot class-incremental learning through prototype optimization

Cited by: 0
Authors
Jiang, Mengjuan [1 ]
Fan, Jiaqing [1 ]
Li, Fanzhang [1 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, 1 ShiZi St, Suzhou 215006, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Few-shot learning; Incremental learning; Prototype optimization; Discriminative feature;
DOI
10.1007/s10489-025-06605-z
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot class-incremental learning (FSCIL) aims to incrementally learn new classes from limited samples without forgetting previously learned ones. However, the scarcity of samples in new classes often leads the model to overfit and can trigger catastrophic forgetting. To address these challenges, we propose the Prototype Optimization-based Method (POM) for FSCIL. Given the critical role of prototypes in classification, POM improves classification performance by enhancing prototype representativeness. First, prototype quality is directly determined by the capacity of the feature extractor: a stronger extractor yields more discriminative features, on which accurate prototypes depend. Therefore, in the base training phase, we design a hybrid loss function to train the feature extractor thoroughly, enhancing the discriminative power of the prototype representation. Second, we propose a prototype optimization strategy that dynamically adjusts prototype positions by identifying highly similar pairs in the feature space, ensuring sufficient separation between them and reducing confusion between prototypes of new and old classes. Experimental results on miniImageNet, CIFAR100, and CUB200 show that POM achieves strong performance on several key metrics, particularly accuracy and performance retention, significantly surpassing existing methods and demonstrating its effectiveness in incremental learning tasks. Compared to TOPIC, POM improves average accuracy on miniImageNet by 20.40%.
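The abstract does not give implementation details, so the following is only a minimal sketch of the general prototype-separation idea it describes: prototypes are taken as class-mean feature vectors, and any pair whose cosine similarity exceeds a threshold is nudged apart along the line connecting them. The function names, the similarity threshold, and the fixed step size are all illustrative assumptions, not POM's actual procedure.

```python
import numpy as np

def class_prototypes(features, labels):
    """Compute each class prototype as the mean of that class's feature vectors."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def separate_prototypes(protos, sim_threshold=0.9, step=0.5):
    """Push apart prototype pairs whose cosine similarity exceeds a threshold.

    Each overly similar pair is moved symmetrically along the unit vector
    connecting the two prototypes (hypothetical update rule, for illustration).
    """
    protos = protos.copy()
    n = len(protos)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = protos[i], protos[j]
            sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            if sim > sim_threshold:
                diff = a - b
                norm = np.linalg.norm(diff)
                if norm > 0:  # identical prototypes have no separation direction
                    protos[i] = a + step * diff / norm
                    protos[j] = b - step * diff / norm
    return protos
```

After the update, previously confusable prototype pairs have lower cosine similarity, which is the separation property the abstract attributes to the optimization strategy.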
Pages: 19