Reconstructed SqueezeNext with C-CBAM for offline handwritten Chinese character recognition

Cited by: 1
Authors
Wu, Ruiqi [1 ]
Zhou, Feng [1 ]
Li, Nan [1 ]
Liu, Xian [1 ]
Wang, Rugang [1 ]
Affiliations
[1] Yancheng Inst Technol, Sch Informat Technol, Yancheng, Jiangsu, Peoples R China
Keywords
CNN; Lightweight model; Character recognition; Attention model; Convolutional neural network
DOI
10.7717/peerj-cs.1529
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Background. Handwritten Chinese character recognition (HCCR) is a difficult problem in character recognition: Chinese characters are diverse and many of them are highly similar. Moreover, HCCR models consume substantial computational resources at runtime, making them difficult to deploy on resource-limited platforms.

Methods. To reduce computational cost and improve runtime efficiency, this article proposes an improved lightweight HCCR model. We reconstructed the basic modules of the SqueezeNext network so that the model is compatible with the introduced attention module and with model compression techniques. The proposed Cross-stage Convolutional Block Attention Module (C-CBAM) redeploys the Spatial Attention Module (SAM) and the Channel Attention Module (CAM) according to the feature-map characteristics of the deep and shallow layers of the model, enhancing information interaction between the deep and shallow layers. A reformulated intra-stage convolutional kernel importance criterion, which incorporates the normalization properties of the weights, enables structured pruning in equal proportions at each stage of the model. Quantization-aware training then maps the 32-bit floating-point weights of the pruned model to 8-bit fixed-point weights with minor loss.

Results. Pruning with the new convolutional kernel importance criterion achieves a pruning rate of 50.79% with little impact on accuracy. Together, the optimization methods compress the model to 1.06 MB while achieving 97.36% accuracy on the CASIA-HWDB dataset. Compared with the initial model, the size is reduced by 87.15% and the accuracy improves by 1.71%. The proposed model thus greatly reduces running time and storage requirements while maintaining accuracy.
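The abstract's per-stage structured pruning can be illustrated with a minimal sketch. The paper's exact importance criterion is not reproduced here; this assumes a common L1-norm-based score, normalized within each stage, with an equal proportion of the lowest-scoring kernels pruned per stage. All function and variable names are illustrative, not from the paper.

```python
# Hedged sketch of per-stage structured pruning by normalized kernel importance.
# Assumption: importance = L1 norm of the kernel weights, normalized within the
# stage; an equal fraction of kernels is pruned in every stage.

def kernel_l1_norms(stage_kernels):
    """L1 norm of each kernel, where a kernel is a nested list of weights."""
    def l1(w):
        if isinstance(w, list):
            return sum(l1(x) for x in w)
        return abs(w)
    return [l1(k) for k in stage_kernels]

def select_pruned_kernels(stage_kernels, prune_ratio):
    """Indices of the lowest-importance kernels to remove from one stage."""
    norms = kernel_l1_norms(stage_kernels)
    total = sum(norms) or 1.0
    # Normalize scores within the stage so stages are comparable.
    scores = [n / total for n in norms]
    n_prune = int(len(scores) * prune_ratio)
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    return sorted(order[:n_prune])

# Example: four 2x2 kernels in one stage, pruning ~50% of them
# (cf. the 50.79% pruning rate reported in the abstract).
stage = [
    [[0.10, -0.20], [0.00, 0.10]],    # low importance
    [[1.50, -1.20], [0.80, 0.90]],    # high importance
    [[0.05, 0.00], [0.02, -0.01]],    # lowest importance
    [[0.70, 0.60], [-0.50, 0.40]],    # medium importance
]
print(select_pruned_kernels(stage, 0.5))  # → [0, 2]
```

Pruning an equal proportion within each stage, rather than globally, keeps the network's stage-wise width balance intact, which is what makes the pruning structured rather than unstructured.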
Pages: 24