Lightweight deep network for traffic sign classification

Cited by: 177
Authors
Zhang, Jianming [1 ,2 ]
Wang, Wei [1 ,2 ]
Lu, Chaoquan [1 ,2 ]
Wang, Jin [1 ,2 ]
Sangaiah, Arun Kumar [3 ]
Affiliations
[1] Changsha Univ Sci & Technol, Sch Comp & Commun Engn, Changsha 410114, Hunan, Peoples R China
[2] Changsha Univ Sci & Technol, Hunan Prov Key Lab Intelligent Proc Big Data Tran, Changsha 410114, Peoples R China
[3] Vellore Inst Technol, Sch Comp Sci & Engn, Vellore 632014, Tamil Nadu, India
Funding
National Natural Science Foundation of China
Keywords
Convolutional neural networks; Traffic sign classification; Knowledge distillation; Network pruning; RECOGNITION;
DOI
10.1007/s12243-019-00731-9
Chinese Library Classification (CLC)
TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0809
Abstract
Deeper neural networks have achieved great results in the field of computer vision and have been successfully applied to tasks such as traffic sign recognition. However, because traffic sign recognition systems are often deployed in resource-constrained environments, the network design must be both slim and accurate. Accordingly, in this paper we propose two novel lightweight networks that achieve higher recognition accuracy while using fewer trainable parameters. Knowledge distillation transfers the knowledge in a trained model, called the teacher network, to a smaller model, called the student network. Moreover, to improve the accuracy of traffic sign recognition, we implement a new module in the teacher network that combines two streams of feature channels with dense connectivity. To enable easy deployment on mobile devices, the student network is a simple end-to-end architecture containing five convolutional layers and a fully connected layer. Furthermore, using batch normalization (BN) scaling factors that are pushed towards zero to identify insignificant channels, we prune redundant channels from the student network, yielding a compact model whose accuracy is comparable to that of more complex models. The teacher network achieves an accuracy of 93.16% when trained and tested on the general-purpose CIFAR-10 dataset. Using the knowledge of the teacher network, we train the student network on the GTSRB and BTSC traffic sign datasets. The resulting student model uses only 0.8 million parameters while achieving accuracies of 99.61% and 99.13% on the two datasets, respectively. The experimental results show that our lightweight networks are useful for deploying deep convolutional neural networks (CNNs) on mobile embedded devices.
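The abstract relies on two standard compression ideas, knowledge distillation and channel pruning guided by BN scaling factors. Below is a minimal PyTorch sketch of both, not the authors' implementation: the temperature T, weighting alpha, pruning threshold, and function names are illustrative assumptions, and the random tensors merely stand in for real traffic-sign batches.

# Minimal sketch (assumptions, not the paper's code): distillation loss and
# BN-scaling-factor channel selection in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend the soft-target KL term (teacher) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def channels_to_keep(bn: nn.BatchNorm2d, threshold=1e-2):
    """Channels whose BN scaling factor |gamma| falls below the threshold are
    treated as insignificant; the rest are kept (threshold is an assumption)."""
    gamma = bn.weight.detach().abs()
    return (gamma >= threshold).nonzero(as_tuple=True)[0]

if __name__ == "__main__":
    # Toy usage: 43 classes as in GTSRB, random logits standing in for a batch.
    student_logits = torch.randn(8, 43)
    teacher_logits = torch.randn(8, 43)
    labels = torch.randint(0, 43, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    bn = nn.BatchNorm2d(64)
    keep = channels_to_keep(bn)
    print(loss.item(), keep.numel(), "of 64 channels kept")

In the pruning step described in the abstract, the kept channel indices would then be used to rebuild the corresponding convolutional layers with a smaller width; how the authors schedule sparsity on the BN factors is not detailed here.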
Pages: 369-379
Page count: 11