SSIT: a sample selection-based incremental model training method for image recognition

Cited: 0
Authors
Yichuan Zhang
Yadi Liu
Guangming Yang
Jie Song
Affiliations
[1] Northeastern University
Source
Neural Computing and Applications | 2022 / Vol. 34
Keywords
Sample selection; Incremental training; Recognition gain; Category imbalance; Image recognition;
DOI: Not available
Abstract
In the big data environment, the continual expansion of image data sets requires the image recognition process to adapt to changes in sample characteristics and data distribution. Image recognition research therefore focuses on finding the balance point of incremental learning in the stability-plasticity dilemma under limited computing and storage resources. Existing incremental learning methods have drawbacks in generalization performance, number of iteration rounds, convergence speed, and data category imbalance, so it is essential to study incremental learning methods for image recognition training. In this study, a sample selection-based incremental model training method (SSIT) is proposed for image recognition. The training process is improved by optimizing the training samples needed for each iteration. A generalization error-based category determination method is proposed to avoid imbalance among the training samples, and a dynamic weight-based sample selection method is proposed to increase the recognition gain of each training round. Experiments show that this method enhances the generalization ability of the model while balancing the recognition effect, reducing the number of iterations, and accelerating convergence.
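The abstract describes the method only at a high level. The sketch below illustrates what one incremental training round with generalization error-driven category quotas and dynamic-weight sample selection might look like; all function names, the loss-based weighting rule, and the quota allocation are assumptions for illustration, not the algorithm defined in the paper.

```python
# Minimal sketch (assumed design, not the paper's algorithm) of one incremental
# selection round: (a) per-category quotas proportional to estimated per-class
# generalization error, (b) within each class, samples drawn with probability
# proportional to a dynamic weight (here taken to be the current loss).
import numpy as np

def category_quotas(per_class_error, budget):
    """Give classes with larger estimated generalization error a larger share
    of the selection budget (assumed proportional allocation)."""
    err = np.asarray(per_class_error, dtype=float)
    probs = err / err.sum() if err.sum() > 0 else np.full(len(err), 1.0 / len(err))
    return np.round(probs * budget).astype(int)

def select_batch(losses, labels, per_class_error, budget, rng=None):
    """Select the incremental training batch for this round."""
    rng = rng or np.random.default_rng(0)
    losses, labels = np.asarray(losses), np.asarray(labels)
    chosen = []
    for cls, quota in enumerate(category_quotas(per_class_error, budget)):
        idx = np.flatnonzero(labels == cls)
        if len(idx) == 0 or quota == 0:
            continue
        w = losses[idx] + 1e-8              # dynamic weights (assumed: loss-based)
        p = w / w.sum()
        k = min(quota, len(idx))
        chosen.extend(rng.choice(idx, size=k, replace=False, p=p))
    return np.array(chosen)

# Toy usage: 3 classes, 300 candidate samples, select 60 for this round.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=300)
losses = rng.random(300)
per_class_error = [0.30, 0.10, 0.05]        # class 0 generalizes worst
batch = select_batch(losses, labels, per_class_error, budget=60, rng=rng)
print(len(batch), np.bincount(labels[batch], minlength=3))
```

Under this assumed scheme, classes with higher validation error receive more of each round's budget, which is one plausible way to counter category imbalance while still favoring high-loss (high-gain) samples.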
Pages: 3117-3134
Number of pages: 17