An Improved Convolutional Neural Network Algorithm and Its Application in Multilabel Image Labeling

Cited: 8
Authors
Cao, Jianfang [1,2]
Wu, Chenyan [2]
Chen, Lichao [2]
Cui, Hongyan [2]
Feng, Guoqing [1]
Affiliations
[1] Xinzhou Teachers Univ, Dept Comp Sci & Technol, Xinzhou 034000, Peoples R China
[2] Taiyuan Univ Sci & Technol, Sch Comp Sci & Technol, Taiyuan 030024, Shanxi, Peoples R China
Keywords
Compendex
DOI
10.1155/2019/2060796
CLC classification
Q [Biological Sciences]
Discipline classification codes
07; 0710; 09
Abstract
In today's society, image resources are everywhere, and the number of available images can be overwhelming. How to rapidly and effectively query, retrieve, and organize image information has become a popular research topic, and automatic image annotation is the key to text-based image retrieval. When annotated training samples are unevenly distributed across semantic labels, annotation accuracy for low-frequency labels can be poor. In this study, a dual-channel convolutional neural network (DCCNN) was designed to improve the accuracy of automatic labeling. The model integrates two convolutional neural network (CNN) channels with different structures: one channel is trained on the low-frequency samples, increasing their proportion in the model, while the other is trained on the entire training set. During labeling, the outputs of the two channels are fused to obtain the final labeling decision. We verified the proposed model on the Caltech-256, Pascal VOC 2007, and Pascal VOC 2012 standard datasets. On the Pascal VOC 2012 dataset, the proposed DCCNN achieves an overall labeling accuracy of up to 93.4% after 100 training iterations, 8.9% higher than the CNN and 15% higher than the traditional method; the CNN reaches a similar accuracy only after 2,500 training iterations. On the 50,000-image dataset drawn from Caltech-256 and Pascal VOC 2012, the performance of the DCCNN is relatively stable, with an average labeling accuracy above 93%, whereas the CNN reaches only 91% even after extended training. Furthermore, the proposed DCCNN achieves a labeling accuracy for low-frequency labels approximately 10% higher than that of the CNN, which further verifies the reliability of the proposed model.
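The dual-channel design summarized in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the authors' exact configuration: the layer sizes, the separate emphasis on low-frequency samples, and the weighted-sum fusion rule are all assumptions made only to show how two CNN channels can be trained separately and fused at prediction time.

# Minimal sketch of a dual-channel CNN (DCCNN) for multilabel image labeling.
# Assumes PyTorch; layer sizes and the fusion rule are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

def small_cnn(num_labels: int) -> nn.Sequential:
    """One convolutional channel: two conv blocks followed by a classifier head."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_labels),
    )

class DCCNN(nn.Module):
    """Two CNN channels: one trained on the full training set, one trained with
    low-frequency samples emphasized; their outputs are fused for the final decision."""
    def __init__(self, num_labels: int, fusion_weight: float = 0.5):
        super().__init__()
        self.full_channel = small_cnn(num_labels)     # channel trained on all training samples
        self.lowfreq_channel = small_cnn(num_labels)  # channel trained on low-frequency samples
        self.fusion_weight = fusion_weight            # assumed simple weighted-sum fusion

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits_full = self.full_channel(x)
        logits_low = self.lowfreq_channel(x)
        # Fuse the two channels' outputs (weighted sum is an illustrative choice).
        return self.fusion_weight * logits_full + (1 - self.fusion_weight) * logits_low

# Usage: multilabel prediction with a sigmoid threshold on the fused logits.
model = DCCNN(num_labels=20)
images = torch.randn(4, 3, 128, 128)      # dummy batch of RGB images
probs = torch.sigmoid(model(images))
predicted_labels = (probs > 0.5).int()    # one 0/1 indicator per label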
Pages: 12