Incremental Learning Based on Neuron Regularization and Resource Releasing

Cited by: 1
Authors
Mo J. [1 ]
Zhu Y. [1 ]
Yuan H. [1 ]
Lin L. [1 ]
Huang S. [1 ]
Affiliations
[1] School of Information and Communication, Guilin University of Electronic Technology, Guilin
Source
Huanan Ligong Daxue Xuebao/Journal of South China University of Technology (Natural Science) | 2022, Vol. 50, No. 6
Funding
National Natural Science Foundation of China
Keywords
Catastrophic forgetting; Deep learning; Fixed capacity environment; Incremental learning; Neuron regularization; Resource releasing mechanism;
DOI
10.12141/j.issn.1000-565X.210404
Abstract
To address the catastrophic forgetting that deep learning systems suffer during image classification in an incremental scenario, this paper proposed an incremental learning algorithm based on neuron regularization and a resource releasing mechanism. The method builds on the framework of Bayesian neural networks. Firstly, the input weights were grouped by neuron, and within each group the standard deviations of the weights were constrained to a shared value. Secondly, during training, the weights of each group were regularized with a strength set according to that unified standard deviation. Finally, parameters determining the release ratio and release strength were introduced into the loss function, guiding the model to selectively dilute the regularization strength on some weights so as to maintain its learning ability. Experiments on several common datasets show that the proposed method exploits the continual learning capability of the model more effectively, and that a better model can be learned even in a fixed-capacity environment. © 2022, Editorial Department, Journal of South China University of Technology. All rights reserved.
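The abstract describes the method only at a high level. The following is a minimal PyTorch-style sketch of the two ideas it names: grouping a layer's input weights by neuron with one shared standard deviation per group, and a resource releasing mechanism that dilutes the regularization on a chosen fraction of groups. All identifiers (neuron_group_penalty, release_ratio, release_strength), the quadratic penalty form, and the top-k release rule are illustrative assumptions, not the paper's actual loss.

    # Hypothetical sketch, not the paper's implementation: a per-neuron
    # regularization penalty with a resource releasing term, assuming a
    # quadratic drift penalty and a top-k release rule.
    import torch
    import torch.nn as nn

    def neuron_group_penalty(layer: nn.Linear,
                             old_weight: torch.Tensor,
                             group_std: torch.Tensor,
                             release_ratio: float = 0.1,
                             release_strength: float = 0.5) -> torch.Tensor:
        # Each row of layer.weight (out_features, in_features) is one neuron's
        # group of input weights; group_std (out_features,) is the shared
        # standard deviation attached to that group on earlier tasks.
        precision = 1.0 / (group_std ** 2 + 1e-8)  # small std -> strong penalty

        # Resource releasing: take the release_ratio fraction of groups with
        # the largest std (least important) and weaken their penalty by
        # release_strength, so they stay free to learn the new task.
        k = max(1, int(release_ratio * group_std.numel()))
        _, release_idx = torch.topk(group_std, k)
        scale = torch.ones_like(precision)
        scale[release_idx] = release_strength

        # Quadratic drift of each group from its previous-task weights.
        drift = ((layer.weight - old_weight) ** 2).sum(dim=1)  # (out_features,)
        return (scale * precision * drift).sum()

On a new task, the total objective would then be something like task_loss + lam * neuron_group_penalty(layer, old_weight, group_std), where lam trades retention of old tasks against plasticity on the new one.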
Pages: 71-79, 90
Related papers (19 in total)
[1]  
HASSABIS D, KUMARAN D, SUMMERFIELD C, et al., Neuroscience-inspired artificial intelligence [J], Neuron, 95, 2, pp. 245-258, (2017)
[2]  
PARISI G I, KEMKER R, PART J L, et al., Continual lifelong learning with neural networks: a review [J], Neural Networks, 113, pp. 54-71, (2019)
[3]  
REBUFFI S A, KOLESNIKOV A, SPERL G, et al., iCaRL: incremental classifier and representation learning [C], Proceedings of 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5533-5542, (2017)
[4]  
YE X, YING F, PAN J, et al., Incremental learning using conditional adversarial networks [C], Proceedings of 2019 IEEE/CVF International Conference on Computer Vision, pp. 6618-6627, (2019)
[5]  
ZHAI M Y, CHEN L, TUNG F, et al., Lifelong GAN: continual learning for conditional image generation [C], Proceedings of 2019 IEEE/CVF International Conference on Computer Vision, pp. 2759-2768, (2019)
[6]  
CAI S S, XU Z W, HUANG Z C, et al., Enhancing CNN incremental learning capability with an expanded network [C], Proceedings of IEEE International Conference on Multimedia and Expo, pp. 1-6, (2018)
[7]  
ZOU Guo-feng, FU Gui-xia, WANG Ke-jun, et al., Construction method of adaptive deep convolutional neural network model [J], Journal of Beijing University of Posts and Telecommunications, 40, 4, pp. 98-103, (2017)
[8]  
MALLYA A, LAZEBNIK S, PackNet: adding multiple tasks to a single network by iterative pruning [C], Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7766-7773, (2018)
[9]  
SERRA J, SURIS D, MIRON M, et al., Overcoming catastrophic forgetting with hard attention to the task [C], Proceedings of the 35th International Conference on Machine Learning, pp. 4548-4557, (2018)
[10]  
KIRKPATRICK J, PASCANU R, RABINOWITZ N, et al., Overcoming catastrophic forgetting in neural networks [J], Proceedings of the National Academy of Sciences, 114, 13, pp. 3521-3526, (2017)