Mural classification model based on high- and low-level vision fusion

Cited by: 5
Authors
Cao, Jianfang [1,2]
Cui, Hongyan [1]
Zhang, Zibang [1]
Zhao, Aidi [1]
Affiliations
[1] Taiyuan Univ Sci & Technol, Sch Comp Sci & Technol, Taiyuan 030024, Peoples R China
[2] Xinzhou Teachers Univ, Dept Comp Sci & Technol, 10 Peace West St, Xinzhou 034000, Peoples R China
Keywords
VGGNet model; Transfer learning; Mural classification; Feature fusion; Low-level features; SVM classifier
DOI
10.1186/s40494-020-00464-2
Chinese Library Classification (CLC)
C [Social Sciences, General]
Discipline Classification Code
03; 0303
Abstract
The rapid classification of ancient murals is a pressing issue confronting scholars because of the rich content and information contained in the images. Convolutional neural networks (CNNs) have been widely applied in computer vision because of their excellent classification performance; however, their network architectures tend to be complex, which can lead to overfitting. To address this overfitting problem, a classification model for ancient murals was developed in this study on the basis of a pretrained VGGNet model that integrates deep transfer learning with simple low-level visual features. First, a data augmentation algorithm was used to enlarge the original mural dataset. Then, transfer learning was applied to adapt the pretrained VGGNet model to this dataset, and the fine-tuned model was used to extract high-level visual features. These features were fused with low-level features of the murals, such as color and texture, to form feature descriptors. Finally, the descriptors were fed into classifiers to obtain the classification results. The precision, recall and F1-score of the proposed model were 80.64%, 78.06% and 78.63%, respectively, on the constructed mural dataset. Comparisons with AlexNet and a traditional backpropagation (BP) network demonstrated the effectiveness of the proposed method for mural image classification, and its generalization ability was confirmed by applying it to other datasets. The proposed algorithm jointly considers the high- and low-level visual characteristics of murals, consistent with human vision.
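A minimal Python sketch of the pipeline described above is given below. The record does not specify the exact VGG variant, low-level descriptors, fusion scheme or SVM settings, so VGG16 fc7 features, an HSV color histogram, a uniform-LBP texture histogram and an RBF-kernel SVM are assumptions made for illustration only; function names such as extract_descriptor are hypothetical.

# Hypothetical sketch of the high-/low-level feature fusion pipeline.
# Assumptions (not the authors' released code): VGG16 features, HSV color
# histogram + uniform LBP texture histogram, RBF-kernel SVM classifier.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pretrained VGG16 with the last fully connected layer removed, so a forward
# pass yields a 4096-d high-level feature vector (torchvision >= 0.13; older
# versions use models.vgg16(pretrained=True)).
vgg = models.vgg16(weights="IMAGENET1K_V1")
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def low_level_features(img):
    """Simple color (HSV histograms) and texture (uniform LBP) descriptors."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float64) / 255.0
    hsv = rgb2hsv(rgb)
    color_hist = np.concatenate([
        np.histogram(hsv[..., c], bins=16, range=(0.0, 1.0), density=True)[0]
        for c in range(3)
    ])
    gray = (rgb2gray(rgb) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    tex_hist = np.histogram(lbp, bins=10, range=(0, 10), density=True)[0]
    return np.concatenate([color_hist, tex_hist])

def extract_descriptor(path):
    """Fuse high-level VGG features with low-level color/texture features."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        deep = vgg(preprocess(img).unsqueeze(0)).squeeze(0).numpy()
    return np.concatenate([deep, low_level_features(img)])

def train_classifier(samples):
    """samples: list of (image_path, label) pairs from a (hypothetical) mural dataset."""
    X = np.stack([extract_descriptor(p) for p, _ in samples])
    y = np.array([label for _, label in samples])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X, y)
    return clf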
Pages: 18