Rethinking Few-Shot Class-Incremental Learning With Open-Set Hypothesis in Hyperbolic Geometry

Cited by: 2
Authors
Cui, Yawen [1 ]
Yu, Zitong [2 ]
Peng, Wei [3 ]
Tian, Qi [4 ]
Liu, Li [5 ,6 ]
Affiliations
[1] Univ Oulu, CMVS, Oulu 90570, Finland
[2] Great Bay Univ, Dongguan 523000, Guangdong, Peoples R China
[3] Stanford Univ, CNSlab, Stanford, CA 94305 USA
[4] Xidian Univ, Xian 710071, Peoples R China
[5] Natl Univ Def Technol NUDT, Coll Elect Sci, Changsha 410073, Peoples R China
[6] Univ Oulu, Ctr Machine Vis & Signal Anal CMVS, Oulu 90570, Finland
Keywords
Few-shot learning; class-incremental learning; hyperbolic deep neural network; open-set recognition; knowledge distillation;
DOI
10.1109/TMM.2023.3340550
CLC classification
TP [Automation and Computer Technology];
Discipline code
0812
Abstract
After first training on a large base dataset, Few-Shot Class-Incremental Learning (FSCIL) aims to continually learn a sequence of few-shot tasks containing novel classes. FSCIL faces two main challenges: overfitting to novel classes with limited labeled samples, and catastrophic forgetting of previously seen classes. The current FSCIL protocol mimics the general class-incremental learning setting with a unified framework, but existing frameworks under this protocol are biased toward the classes in the base dataset, since the model's performance is dominated by the size of the training set. Moreover, the stability-plasticity constraint is difficult to handle within a unified FSCIL framework. To address these issues, we rethink the configuration of FSCIL under an open-set hypothesis, reserving room in the first session for incoming categories. To find a better decision boundary between the closed space and the open space, a Hyperbolic Reciprocal Point Learning (Hyper-RPL) module is built upon Reciprocal Point Learning with hyperbolic neural networks. In addition, when learning novel categories from limited labeled data, we incorporate a hyperbolic metric learning (Hyper-Metric) module into the distillation-based framework to alleviate overfitting and better balance the preservation of old knowledge against the acquisition of new knowledge. Finally, comprehensive evaluations of the proposed configuration and modules on three benchmark datasets validate their effectiveness and achieve state-of-the-art results.
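Both Hyper-RPL and Hyper-Metric rely on measuring distances in hyperbolic rather than Euclidean space. As an illustrative sketch only (the paper's actual modules are not reproduced here), the following shows the standard geodesic distance on the Poincaré ball, the hyperbolic model commonly used in hyperbolic neural networks; the function name and the `eps` guard are our own choices, not taken from the paper:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arccosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    Inputs must have Euclidean norm strictly less than 1.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    sq_diff = np.sum((u - v) ** 2)
    # Clamp the denominator to avoid division by zero near the ball boundary.
    denom = max((1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2)), eps)
    return float(np.arccosh(1.0 + 2.0 * sq_diff / denom))
```

Distances blow up as points approach the boundary of the ball, so hyperbolic space offers exponentially growing "room" away from the origin; this is one motivation for using it to separate closed (known-class) regions from the open space reserved for incoming categories.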
Pages: 5897-5910 (14 pages)