Reconstructed SqueezeNext with C-CBAM for offline handwritten Chinese character recognition

Cited by: 1
Authors
Wu, Ruiqi [1 ]
Zhou, Feng [1 ]
Li, Nan [1 ]
Liu, Xian [1 ]
Wang, Rugang [1 ]
Affiliations
[1] Yancheng Inst Technol, Sch Informat Technol, Yancheng, Jiangsu, Peoples R China
Keywords
CNN; Lightweight model; Character recognition; Attention model; Convolutional neural network
DOI
10.7717/peerj-cs.1529
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Background. Handwritten Chinese character recognition (HCCR) is a difficult problem in character recognition: Chinese characters are diverse and many of them are highly similar. HCCR models also consume large amounts of computational resources at runtime, which makes them difficult to deploy on resource-limited development platforms.

Methods. To reduce the computational cost and improve the runtime efficiency of such models, an improved lightweight HCCR model is proposed in this article. We reconstruct the basic modules of the SqueezeNext network so that the model is compatible with the introduced attention module and with model compression techniques. The proposed Cross-stage Convolutional Block Attention Module (C-CBAM) redeploys the Spatial Attention Module (SAM) and the Channel Attention Module (CAM) according to the feature-map characteristics of the deep and shallow layers of the model, aiming to enhance the information interaction between deep and shallow layers. A reformulated intra-stage convolutional kernel importance criterion incorporates the normalization property of the weights and allows structured pruning in equal proportions for each stage of the model. Quantization-aware training then maps the 32-bit floating-point weights of the pruned model to 8-bit fixed-point weights with only a minor loss.

Results. Pruning with the new convolutional kernel importance criterion proposed in this article achieves a pruning rate of 50.79% with little impact on accuracy. Combining the optimization methods compresses the model to 1.06 MB and yields an accuracy of 97.36% on the CASIA-HWDB dataset. Compared with the initial model, the size is reduced by 87.15% and the accuracy is improved by 1.71%. The proposed model greatly reduces running time and storage requirements while maintaining accuracy.
Pages: 24
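The abstract's C-CBAM rearranges the two standard CBAM sub-modules, the Channel Attention Module (CAM) and the Spatial Attention Module (SAM), across the deep and shallow stages of the network. The exact placement is not given in the abstract, so the following is only a minimal PyTorch-style sketch of the two sub-modules in their usual formulation; the class names and any cross-stage wiring are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the CBAM sub-modules that C-CBAM redeploys.
# Standard CAM/SAM formulations; how the paper distributes them across
# shallow and deep stages is not specified in the abstract and is left out.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):  # CAM
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling per channel
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling per channel
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):  # SAM
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # average over channels
        mx = x.amax(dim=1, keepdim=True)     # max over channels
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))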
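The structured pruning step ranks convolutional kernels with an intra-stage importance criterion tied to the normalization property of the weights and removes the same proportion in every stage. The criterion itself is not reproduced in the abstract, so the sketch below assumes a simple L1-norm score normalized within each layer; calling it once per stage with the same ratio gives the equal-proportion behaviour described above.

# Sketch of per-stage structured filter selection, assuming an L1-norm
# importance score normalized within each layer; this is an illustrative
# stand-in for the paper's reformulated criterion, not its exact form.
import torch

def filters_to_prune(stage_convs, prune_ratio: float = 0.5):
    """Return {layer_index: [output-filter indices to remove]} for one stage."""
    scores, index = [], []
    for li, conv in enumerate(stage_convs):           # nn.Conv2d layers of one stage
        w = conv.weight.detach()                      # (out_ch, in_ch, kH, kW)
        s = w.abs().sum(dim=(1, 2, 3))                # L1 norm of each output filter
        s = s / (s.sum() + 1e-12)                     # normalize within the layer
        scores.append(s)
        index.extend((li, fi) for fi in range(s.numel()))
    all_scores = torch.cat(scores)
    k = int(prune_ratio * all_scores.numel())         # same proportion in every stage
    _, order = torch.sort(all_scores)                 # least important first
    victims = {}
    for pos in order[:k].tolist():
        li, fi = index[pos]
        victims.setdefault(li, []).append(fi)
    return victims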
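Quantization-aware training maps the pruned model's 32-bit floating-point weights to 8-bit fixed-point weights by simulating the quantization error during training. The paper's exact scheme is not given in the abstract; the sketch below shows the common asymmetric affine mapping (scale and zero-point) that such training fake-quantizes in the forward pass.

# Sketch of the affine FP32 -> INT8 mapping simulated by quantization-aware
# training. In a real QAT setup the round() is wrapped in a straight-through
# estimator so gradients flow; settings here are assumptions, not the paper's.
import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmin, qmax = 0, 2 ** num_bits - 1                        # 8-bit range 0..255
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / (qmax - qmin)
    zero_point = (qmin - w_min / scale).round().clamp(qmin, qmax)
    q = (w / scale + zero_point).round().clamp(qmin, qmax)   # quantize to the int grid
    return (q - zero_point) * scale                          # dequantize for training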