Channel Pruning Method for Signal Modulation Recognition Deep Learning Models

Cited by: 9
Authors
Chen, Zhuangzhi [1 ,2 ]
Wang, Zhangwei [3 ]
Gao, Xuzhang [3 ]
Zhou, Jinchao [3 ]
Xu, Dongwei [3 ]
Zheng, Shilian [4 ]
Xuan, Qi [5 ,6 ,7 ]
Yang, Xiaoniu [4 ]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Hangzhou 310023, Peoples R China
[2] Zhejiang Univ Technol, Coll Comp Sci & Engn, Hangzhou 310023, Peoples R China
[3] Zhejiang Univ Technol, Inst Cyberspace Secur, Hangzhou 310023, Peoples R China
[4] Sci & Technol Commun Informat Secur Control Lab, Jiaxing 314033, Peoples R China
[5] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou 310023, Peoples R China
[6] PCL Res Ctr Networks & Commun, Peng Cheng Lab, Shenzhen 518000, Peoples R China
[7] Utron Technol Co Ltd, Hangzhou 310056, Peoples R China
Keywords
Neural networks; Convolution; Computational modeling; Modulation; Feature extraction; Deep learning; Load modeling; Automatic modulation recognition; deep learning; neural network model pruning; edge devices; CLASSIFICATION;
DOI
10.1109/TCCN.2023.3329000
Chinese Library Classification
TN [Electronic technology, communication technology];
Discipline code
0809
Abstract
Automatic modulation recognition (AMR) plays an important role in communication systems. With the growth of data volume and computing power, deep learning has shown great potential in AMR. However, deep learning models suffer from heavy resource consumption caused by their huge number of parameters and high computational complexity, which limits their performance in scenarios that require fast response. The models must therefore be compressed and accelerated; channel pruning is an effective way to reduce the amount of computation and speed up model inference. In this paper, we propose a new channel pruning method suited to AMR deep learning models. We consider both the channel redundancy of the convolutional layer and the channel importance measured by the $\gamma$ scale factor of the batch normalization (BN) layer. The proposed method jointly evaluates model channels from the perspectives of structural similarity and numerical value, and generates evaluation indicators for selecting channels, preventing important convolutional channels from being pruned. Combined with other strategies, such as one-shot pruning and local pruning, the classification performance of the model can be further preserved. We demonstrate the effectiveness of our approach on a variety of AMR models. Compared with classical pruning methods, the proposed method not only better maintains classification accuracy but also achieves a higher compression ratio. Finally, we deploy the pruned network model to edge devices, validating the significant acceleration effect of our method.
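The joint criterion described in the abstract, BN $\gamma$ importance combined with a structural-redundancy measure on the convolutional filters, can be sketched as below. This is an illustrative reimplementation under stated assumptions, not the paper's exact formulation: the helper names `channel_scores` and `select_prune`, the choice of cosine similarity as the redundancy measure, and the weighting factor `alpha` are all assumptions made for the sketch.

```python
import numpy as np

def channel_scores(filters, gammas, alpha=0.5):
    """Combine BN |gamma| importance with a structural-redundancy
    penalty (cosine similarity to the most similar peer filter).
    `filters` has shape (out_channels, ...); `gammas` has shape
    (out_channels,). Higher score -> more worth keeping."""
    n = filters.shape[0]
    flat = filters.reshape(n, -1)
    # Unit-normalize each filter, then pairwise cosine similarity.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.maximum(norms, 1e-12)
    sim = unit @ unit.T
    np.fill_diagonal(sim, 0.0)
    # Redundancy: similarity to the closest other filter in the layer.
    redundancy = sim.max(axis=1)
    # Importance: |gamma| normalized to [0, 1] within the layer.
    importance = np.abs(gammas) / (np.abs(gammas).max() + 1e-12)
    # High importance and low redundancy -> high (keep) score.
    return alpha * importance - (1.0 - alpha) * redundancy

def select_prune(filters, gammas, ratio=0.5):
    """Return indices of the lowest-scoring channels to prune,
    using a local (per-layer) pruning ratio."""
    scores = channel_scores(filters, gammas)
    k = int(len(gammas) * ratio)
    return np.argsort(scores)[:k]
```

In this sketch a channel that both duplicates another filter and has a near-zero BN scale factor is pruned first, while a channel with a large $\gamma$ survives even at moderate pruning ratios, which is the behavior the joint criterion is meant to encourage.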
Pages: 442-453
Page count: 12