Memorizing Complementation Network for Few-Shot Class-Incremental Learning

Cited by: 25
Authors
Ji, Zhong [1 ,2 ]
Hou, Zhishen [1 ,2 ]
Liu, Xiyao [1 ,2 ,3 ,4 ]
Pang, Yanwei [1 ,2 ]
Li, Xuelong [5 ]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Tianjin Univ, Tianjin Key Lab Brain Inspired Intelligence Techno, Tianjin 300072, Peoples R China
[3] Chinese Acad Sci, Shenyang Inst Automat, State Key Lab Robot, Shenyang 110016, Peoples R China
[4] Chinese Acad Sci, Inst Robot & Intelligent Mfg, Shenyang 110169, Peoples R China
[5] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Power capacitors; Ensemble learning; Knowledge engineering; Feature extraction; Adaptation models; Training; Few-shot learning; class-incremental learning; ensemble learning; memorizing complementation;
DOI
10.1109/TIP.2023.3236160
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot Class-Incremental Learning (FSCIL) aims to learn new concepts continually from only a few samples, a setting prone to catastrophic forgetting and overfitting. The inaccessibility of old-class data and the scarcity of novel samples make it difficult to balance retaining old knowledge against learning new concepts. Motivated by the observation that different models memorize different knowledge when learning novel concepts, we propose a Memorizing Complementation Network (MCNet), which ensembles multiple models so that their differently memorized knowledge complements one another on novel tasks. In addition, to update the model with few novel samples, we develop a Prototype Smoothing Hard-mining Triplet (PSHT) loss that pushes each novel sample away both from the other novel samples in the current task and from the old-class distribution. Extensive experiments on three benchmark datasets, i.e., CIFAR100, miniImageNet, and CUB200, demonstrate the superiority of the proposed method.
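The triplet idea behind PSHT (anchor pulled toward its own class, pushed away from other novel samples and from stored old-class prototypes) can be sketched as a hinge-style loss. This is a minimal illustrative approximation, not the paper's exact formulation: the function name `psht_like_loss`, the use of a class-mean prototype as the positive, the hardest-negative mining rule, and the margin value are all assumptions for demonstration.

```python
import numpy as np

def psht_like_loss(novel_feats, novel_labels, old_prototypes, margin=0.5):
    """Illustrative triplet-style loss in the spirit of PSHT (assumed form).

    Anchor: each novel-class feature.
    Positive: the mean prototype of the anchor's own class.
    Negative: the hardest (nearest) point among other novel classes
              and the stored old-class prototypes.
    """
    classes = np.unique(novel_labels)
    # Class-mean prototypes of the novel classes.
    protos = {c: novel_feats[novel_labels == c].mean(axis=0) for c in classes}

    losses = []
    for f, y in zip(novel_feats, novel_labels):
        pos = np.linalg.norm(f - protos[y])  # distance to own prototype
        # Candidate negatives: other-class novel samples and old prototypes.
        neg_feats = np.concatenate([novel_feats[novel_labels != y],
                                    old_prototypes])
        neg = np.min(np.linalg.norm(neg_feats - f, axis=1))  # hardest negative
        losses.append(max(0.0, pos - neg + margin))          # hinge triplet term
    return float(np.mean(losses))
```

With well-separated novel classes and distant old prototypes, every hinge term vanishes and the loss is zero; crowding the negatives closer than `margin` makes it positive, which is the pressure that spreads novel samples away from the old distribution.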
Pages: 937-948
Page count: 12