Compressing Deep Model With Pruning and Tucker Decomposition for Smart Embedded Systems

Cited: 18
Authors
Dai, Cheng [1 ]
Liu, Xingang [2 ]
Cheng, Hongqiang [2 ]
Yang, Laurence T. [3 ]
Deen, M. Jamal [4 ]
Affiliations
[1] Sichuan Univ, Coll Comp Sci, Chengdu 610017, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[3] St Francis Xavier Univ, Dept Comp Sci, Antigonish, NS B2G 2W5, Canada
[4] McMaster Univ, Dept Elect Engn & Comp Sci, Hamilton, ON L8S 4K1, Canada
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Deep learning; Bayes methods; Internet of Things; Data models; Streaming media; Edge computing; Deep model compression; parameter pruning; smart embedded systems; Tucker decomposition (TD); IOT;
DOI
10.1109/JIOT.2021.3116316
CLC Number
TP [Automation and Computer Technology];
Subject Classification Code
0812;
Abstract
Deep learning has proved to be one of the most effective methods of feature encoding for intelligent applications such as video-based human action recognition. However, its nonconvex optimization mechanism leads to large memory consumption, which hinders deployment on smart embedded systems with limited computational resources. To overcome this challenge, we propose a novel deep model compression technique for smart embedded systems that reduces both memory size and inference complexity with only a small drop in accuracy. First, we propose an improved naive Bayes inference-based channel parameter pruning method to obtain a sparse model with higher accuracy. Then, to improve inference efficiency, an improved Tucker decomposition method is proposed, in which an improved genetic algorithm is used to optimize the Tucker ranks. Finally, to evaluate the effectiveness of the proposed method, extensive experiments are conducted. The experimental results show that our method achieves state-of-the-art performance compared with existing methods in terms of accuracy, parameter compression, and floating-point operations (FLOPs) reduction.
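The channel-mode Tucker decomposition the abstract refers to can be illustrated with a minimal NumPy sketch. This is a plain truncated-HOSVD Tucker-2 factorization of a convolution kernel along its two channel modes, not the authors' implementation; the ranks are fixed by hand here, whereas the paper selects them with a genetic algorithm, and all variable names are illustrative.

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-n unfolding: matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def tucker2_conv(W, r_out, r_in):
    """Tucker-2 decomposition of a conv kernel W of shape (C_out, C_in, kH, kW):
    truncated HOSVD along the output- and input-channel modes only."""
    U_out = np.linalg.svd(mode_unfold(W, 0), full_matrices=False)[0][:, :r_out]
    U_in = np.linalg.svd(mode_unfold(W, 1), full_matrices=False)[0][:, :r_in]
    # Core tensor: project W onto the truncated channel bases.
    core = np.einsum('oihw,or,is->rshw', W, U_out, U_in)
    return core, U_out, U_in

def reconstruct(core, U_out, U_in):
    """Low-rank approximation of the original kernel."""
    return np.einsum('rshw,or,is->oihw', core, U_out, U_in)

def param_counts(shape, r_out, r_in):
    """Parameters before vs. after: full kernel vs. core + two factor matrices."""
    c_out, c_in, kh, kw = shape
    full = c_out * c_in * kh * kw
    compressed = r_out * r_in * kh * kw + c_out * r_out + c_in * r_in
    return full, compressed

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32, 3, 3))
core, U_out, U_in = tucker2_conv(W, r_out=16, r_in=8)
full, comp = param_counts(W.shape, 16, 8)
print(core.shape, full, comp)  # (16, 8, 3, 3) 18432 2432
```

With ranks (16, 8) the 18,432-parameter layer shrinks to 2,432 parameters, which is why rank selection (handled in the paper by the improved genetic algorithm) directly controls the accuracy/compression trade-off.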
Pages: 14490-14500
Page count: 11
Related Papers
56 references in total
[41] Luo, Jian-Hao; Zhang, Hao; Zhou, Hong-Yu; Xie, Chen-Wei; Wu, Jianxin; Lin, Weiyao. ThiNet: Pruning CNN Filters for a Thinner Net. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(10): 2525-2538.
[42] Ren, Lei; Liu, Yuxin; Wang, Xiaokang; Lu, Jinhu; Deen, M. Jamal. Cloud-Edge-Based Lightweight Temporal Convolutional Networks for Remaining Useful Life Prediction in IIoT. IEEE Internet of Things Journal, 2021, 8(16): 12578-12587.
[43] Ren, Lei; Meng, Zihao; Wang, Xiaokang; Lu, Renquan; Yang, Laurence T. A Wide-Deep-Sequence Model-Based Quality Prediction Method in Industrial Process Analysis. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(9): 3721-3731.
[44] Swaminathan, Sridhar; Garg, Deepak; Kannan, Rajkumar; Andres, Frederic. Sparse Low Rank Factorization for Deep Neural Network Compression. Neurocomputing, 2020, 398: 185-196.
[45] Tucker, L. R. Some Mathematical Notes on Three-Mode Factor Analysis. Psychometrika, 1966, 31(3): 279-311.
[46] Wang, Xiaokang; Yang, Laurence Tianruo; Song, Liwen; Wang, Huihui; Ren, Lei; Deen, Jamal. A Tensor-Based Multiattributes Visual Feature Recognition Method for Industrial Intelligence. IEEE Transactions on Industrial Informatics, 2021, 17(3): 2231-2241.
[47] Wang, Xiaokang; Yang, Laurence T.; Wang, Yihao; Ren, Lei; Deen, M. Jamal. ADTT: A Highly Efficient Distributed Tensor-Train Decomposition Method for IIoT Big Data. IEEE Transactions on Industrial Informatics, 2021, 17(3): 1573-1582.
[48] Xiao, X. Advances in Neural Information Processing Systems, 2019: 13681.
[49] Yang, H. Proceedings of the International Conference on Learning Representations, 2020.
[50] Yang, Tien-Ju; Chen, Yu-Hsin; Sze, Vivienne. Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 6071-6079.