Incremental Learning for Online Data Using QR Factorization on Convolutional Neural Networks

Cited by: 0
Authors
Kim, Jonghong [1 ]
Lee, Wonhee [1 ,2 ]
Baek, Sungdae [3 ]
Hong, Jeong-Ho [1 ,2 ,4 ]
Lee, Minho [3 ]
Affiliations
[1] Keimyung Univ, Dongsan Hosp, Sch Med, Dept Obstet & Gynecol, Daegu 42601, South Korea
[2] Keimyung Univ, Dept Med Informat, Sch Med, Daegu 42601, South Korea
[3] Kyungpook Natl Univ, Grad Sch Artificial Intelligence, Daegu 41566, South Korea
[4] Biolink Inc, Daegu 42601, South Korea
Keywords
image processing; incremental learning; convolutional neural network; deep learning; artificial intelligence; compressed sensing; RECOGNITION; STABILITY;
DOI
10.3390/s23198117
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
Catastrophic forgetting, the rapid loss of previously learned representations while learning new data, is one of the main problems of deep neural networks. In this paper, we propose a novel incremental learning framework that addresses the forgetting problem by learning new incoming data in an online manner. The framework can learn additional data or new classes with less catastrophic forgetting. We adapt the hippocampal memory process to deep neural networks by defining the effective maximum of neural activation and its boundary to represent a feature distribution. In addition, we incorporate incremental QR factorization into the deep neural networks so that new data with both existing labels and new labels can be learned with less forgetting. QR factorization provides an accurate subspace prior, and its incremental form expresses how new data relate to both existing classes and new classes while limiting forgetting. In our framework, a set of appropriate features (i.e., nodes) provides an improved representation for each class. We apply our method to a convolutional neural network (CNN) trained on the CIFAR-100 and CIFAR-10 datasets. The experimental results show that the proposed method efficiently alleviates the stability-plasticity dilemma in deep neural networks, preserving the performance of a trained network while effectively learning unseen data and additional new classes.
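The incremental QR factorization mentioned in the abstract can be sketched as the standard Gram-Schmidt column update: given a thin factorization A = QR, a new feature vector is projected onto the current subspace and its orthogonal residual extends the basis. The function name, shapes, and NumPy realization below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def qr_append_column(Q, R, a):
    """Append one new feature vector `a` (a column) to an existing thin QR
    factorization A = Q @ R, returning updated factors for [A, a].

    Standard Gram-Schmidt rank-one update; shown only as a sketch of the
    incremental QR idea. Assumes `a` is not already in span(Q) (rho > 0).
    """
    r = Q.T @ a                      # coordinates of `a` in the current subspace
    q = a - Q @ r                    # residual orthogonal to span(Q)
    rho = np.linalg.norm(q)          # new diagonal entry of R
    Q_new = np.column_stack([Q, q / rho])
    R_new = np.block([
        [R, r[:, None]],
        [np.zeros((1, R.shape[1])), np.array([[rho]])],
    ])
    return Q_new, R_new
```

Updating the factors this way costs O(mk) per new vector instead of recomputing a full QR from scratch, which is what makes the subspace prior cheap to maintain online.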
Pages: 14