Lightweight Automatic Modulation Classification Based on Decentralized Learning

Cited by: 58
Authors
Fu, Xue [1 ]
Gui, Guan [1 ]
Wang, Yu [1 ]
Ohtsuki, Tomoaki [2 ]
Adebisi, Bamidele [3 ]
Gacanin, Haris [4 ]
Adachi, Fumiyuki [5 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Coll Telecommun & Informat Engn, Nanjing 210003, Peoples R China
[2] Keio Univ, Dept Informat & Comp Sci, Yokohama, Kanagawa 2238521, Japan
[3] Manchester Metropolitan Univ, Dept Engn, Fac Sci & Engn, Manchester M1 5GD, Lancs, England
[4] Rhein Westfal TH Aachen, Inst Commun Technol & Embedded Syst, D-52074 Aachen, Germany
[5] Tohoku Univ, Res Org Elect Commun, Sendai, Miyagi 9808577, Japan
Funding
Japan Society for the Promotion of Science; National Natural Science Foundation of China;
Keywords
Automatic modulation classification (AMC); convolutional neural network (CNN); centralized learning; decentralized learning; deep neural network; resource allocation; recognition; compression; algorithm; systems;
DOI
10.1109/TCCN.2021.3089178
CLC number
TN [Electronic technology; Communication technology];
Discipline classification code
0809;
Abstract
Due to the implementation and performance limitations of the centralized-learning automatic modulation classification (CentAMC) method, this paper proposes a decentralized-learning AMC (DecentAMC) method based on model consolidation and a lightweight network design. Specifically, model consolidation is realized by a central device (CD) that performs model averaging (MA) over the edge-device (ED) models, while the multiple EDs carry out local model training. The lightweight design is achieved with a separable convolutional neural network (S-CNN), in which separable convolutional layers replace the standard convolutional layers and most of the fully connected layers are removed. Simulation results show that the proposed method substantially reduces the storage and computational requirements of the EDs as well as the communication overhead, and markedly improves training efficiency. Compared with a standard convolutional neural network (CNN), the space complexity of the S-CNN (i.e., model parameters and output feature maps) is reduced by about 94% and its time complexity (i.e., floating-point operations) by about 96%, while the average correct classification probability degrades by less than 1%. Compared with S-CNN-based CentAMC, and without considering model-weight uploading and downloading, the training efficiency of the proposed method is about N times higher, where N is the number of EDs. When weight uploading and downloading are taken into account, the training efficiency remains high (e.g., with 12 EDs, the proposed AMC method trains about 4 times faster than S-CNN-based CentAMC on dataset D-1 = {2FSK, 4FSK, 8FSK, BPSK, QPSK, 8PSK, 16QAM} and about 5 times faster on dataset D-2 = {2FSK, 4FSK, 8FSK, BPSK, QPSK, 8PSK, PAM2, PAM4, PAM8, 16QAM}), while the communication overhead is reduced by more than 35%.
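The two mechanisms the abstract describes can be sketched briefly: the parameter saving of a depthwise-separable convolution over a standard convolution, and FedAvg-style model averaging across EDs. This is a minimal illustration, not the paper's actual S-CNN architecture; the channel counts and kernel size below are hypothetical choices.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k conv (c_in * k * k) plus 1 x 1 pointwise conv (c_in * c_out)."""
    return c_in * k * k + c_in * c_out

def model_average(ed_models):
    """FedAvg-style consolidation at the CD: element-wise mean of the EDs' weights."""
    n = len(ed_models)
    return [sum(w) / n for w in zip(*ed_models)]

# Hypothetical layer: 64 input channels, 128 output channels, 3x3 kernel.
std = conv_params(64, 128, 3)           # 73728 parameters
sep = separable_conv_params(64, 128, 3) # 8768 parameters
print(f"reduction: {1 - sep / std:.1%}")  # prints "reduction: 88.1%"

# Two EDs upload weights; the CD averages them into the global model.
print(model_average([[1.0, 2.0], [3.0, 4.0]]))  # prints "[2.0, 3.0]"
```

The savings factor of the separable layer is 1/c_out + 1/k^2, which is why deeper layers (large c_out) benefit most; the overall ~94% figure reported in the abstract also reflects the removed fully connected layers.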
Pages: 57-70
Page count: 14