Few-Shot Class-Incremental Learning Based on Feature Distribution Learning

Cited: 0
Authors
Yao, Guangle [1 ,2 ]
Zhu, Juntao [2 ]
Zhou, Wenlong [2 ]
Zhang, Guiyu [1 ,3 ]
Zhang, Wei [4 ,5 ]
Zhang, Qian [5 ]
Affiliations
[1] Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan, Yibin
[2] School of Computer and Network Security, Chengdu University of Technology, Chengdu
[3] School of Automation & Information Engineering, Sichuan University of Science & Engineering, Sichuan, Yibin
[4] School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu
[5] Science and Technology on Electronic Information Control Laboratory, Chengdu
Keywords
deep neural network; few-shot class-incremental learning; incremental learning;
DOI
10.3778/j.issn.1002-8331.2205-0385
Abstract
This paper addresses a challenging problem: few-shot class-incremental learning for deep neural networks, in which a model must gradually learn new knowledge from a small number of samples without forgetting what it has already learned. To balance the model's memory of old knowledge against its learning of new knowledge, the paper proposes a few-shot class-incremental learning method based on feature distribution learning. First, the model is trained on the base classes to obtain a well-performing feature extractor, and the feature distribution information is retained as the learned knowledge. Then, this learned knowledge is mapped, together with the features of the novel classes, into a low-dimensional subspace, so that old knowledge is reviewed and new knowledge is learned in a unified way. Finally, within this subspace, classification weight initializations are generated for each novel class to improve the model's adaptability to novel classes. Extensive experiments show that the method effectively alleviates the model's forgetting of learned knowledge and improves its adaptability to new knowledge. © 2023 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.
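The abstract's three steps (store base-class feature distributions, project old statistics and novel features into a shared low-dimensional subspace, and initialize novel-class classifier weights from subspace prototypes) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic features, the per-class Gaussian (mean, diagonal variance) summary, and the PCA-style projection fitted on base-class means are all assumptions standing in for the learned feature extractor and subspace mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, subspace_dim = 64, 4

# Toy stand-ins for extracted features: 5 base classes (many samples) and
# 3 novel classes (5 shots each). In the paper's setting these would come
# from a feature extractor trained on the base classes.
base_feats = {c: rng.normal(loc=c, size=(100, feat_dim)) for c in range(5)}
novel_feats = {c: rng.normal(loc=c, size=(5, feat_dim)) for c in range(5, 8)}

# Step 1: summarize each base class by its feature distribution (mean and
# diagonal variance) -- the "learned knowledge" kept instead of raw samples.
base_stats = {c: (f.mean(0), f.var(0)) for c, f in base_feats.items()}

# Step 2: fit a projection into a low-dimensional subspace. A simple SVD/PCA
# over the base-class means stands in for the learned subspace mapping
# (note: 5 centered means have rank at most 4, hence subspace_dim = 4).
means = np.stack([m for m, _ in base_stats.values()])
centered = means - means.mean(0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = vt[:subspace_dim].T                      # (feat_dim, subspace_dim)

# "Review" old knowledge by sampling pseudo-features from the stored
# distributions, so old and new classes can be trained on jointly.
pseudo = {c: rng.normal(m, np.sqrt(v), size=(20, feat_dim))
          for c, (m, v) in base_stats.items()}

# Step 3: initialize each novel class's classification weight from its
# subspace-projected prototype (mean of the few available shots).
novel_weights = {c: f.mean(0) @ proj for c, f in novel_feats.items()}
```

Sampling pseudo-features from stored distributions avoids keeping an exemplar memory of raw base-class samples, which is the usual motivation for distribution-based replay in class-incremental learning.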
Pages: 151-157
Page count: 6
References (21 total)
[1] TAO X, HONG X, CHANG X, et al., Few-shot class-incremental learning[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12180-12189, (2020)
[2] DOUILLARD A, CORD M, et al., PODNet: pooled outputs distillation for small-tasks incremental learning[C], European Conference on Computer Vision, (2020)
[3] GIDARIS S, KOMODAKIS N., Generating classification weights with GNN denoising autoencoders for few-shot learning[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 21-30, (2019)
[4] HONG X, SHI W, et al., Analogy-detail networks for object recognition[J], IEEE Transactions on Neural Networks and Learning Systems, 32, pp. 4404-4418, (2021)
[5] RAJASEGARAN J, KHAN S, HAYAT M, et al., iTAML: an incremental task-agnostic meta-learning approach[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 13585-13594, (2020)
[6] YU L, TWARDOWSKI B, LIU X, et al., Semantic drift compensation for class-incremental learning[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6980-6989, (2020)
[7] CHERAGHIAN A, RAHMAN S, FANG P., Semantic-aware knowledge distillation for few-shot class-incremental learning[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2534-2543, (2021)
[8] HAN J D, LI Y J., Survey of catastrophic forgetting research in neural network models[J], Journal of Beijing University of Technology, 47, 5, pp. 551-564, (2021)
[9] REBUFFI S A, KOLESNIKOV A, SPERL G, et al., iCaRL: incremental classifier and representation learning[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5533-5542, (2017)
[10] HOU S, LOY C C, et al., Learning a unified classifier incrementally via rebalancing[C], Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 831-839, (2019)