MR-Transformer: FPGA Accelerated Deep Learning Attention Model for Modulation Recognition

Cited: 0
Authors
Wang, Haiyan [1 ]
Qi, Zhongzheng [1 ]
Li, Zan [1 ]
Zhao, Xiaohui [1 ]
Affiliations
[1] Jilin Univ, Coll Commun Engn, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Field programmable gate arrays; Feature extraction; Modulation; Transformers; Accuracy; Computational modeling; Computational efficiency; Parallel processing; Deep learning; Power demand; modulation recognition; FPGA; transformer; CLASSIFICATION;
DOI
10.1109/TWC.2024.3506743
CLC Classification Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Modulation recognition has emerged as an active research topic for improving communication efficiency in future 6G networks, and it plays an important role in the security of the electromagnetic spectrum. Various pattern recognition methods have been proposed to enhance modulation recognition performance, with deep learning models in particular demonstrating strong results. In this work, we design a modulation recognition model based on an enhanced Transformer, namely MR-Transformer, which is accelerated on a Field Programmable Gate Array (FPGA). The design of MR-Transformer targets high recognition accuracy, low power consumption, and high computation efficiency, making it suitable for modulation recognition on edge devices. MR-Transformer leverages the attention mechanism to extract global features and thereby improve recognition accuracy. An improved matrix multiplication operation and an enhanced Design Space Exploration (DSE) method are proposed in MR-Transformer to improve computation efficiency and reduce resource consumption. We conduct comprehensive experiments to evaluate the performance of MR-Transformer on three platforms, i.e., Central Processing Unit (CPU), Graphics Processing Unit (GPU), and FPGA, using two open-source datasets. According to the evaluation results, the FPGA-based MR-Transformer achieves the best performance among the compared baseline models in terms of accuracy, power consumption, and computation efficiency.
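As context for the attention-based design described in the abstract, the following is a minimal PyTorch sketch of a Transformer-encoder classifier over raw I/Q sample sequences, the common input format for open-source modulation recognition datasets. The abstract does not specify MR-Transformer's actual architecture, so every name and dimension below (IQTransformerClassifier, seq_len=128, 11 classes, 2 encoder layers, etc.) is an illustrative assumption rather than the published design.

```python
import torch
import torch.nn as nn

class IQTransformerClassifier(nn.Module):
    """Illustrative attention-based modulation classifier.

    Input: raw I/Q samples shaped (batch, 2, seq_len). All sizes here
    are placeholder assumptions, not the published MR-Transformer
    configuration.
    """

    def __init__(self, seq_len=128, d_model=64, n_heads=4,
                 n_layers=2, n_classes=11):
        super().__init__()
        # Project each (I, Q) sample pair into the model dimension.
        self.embed = nn.Linear(2, d_model)
        # Learned positional encoding over the sample sequence.
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # Self-attention lets every time step attend to all others,
        # capturing the global features the abstract refers to.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, 2, seq_len) -> (batch, seq_len, 2)
        x = x.permute(0, 2, 1)
        x = self.embed(x) + self.pos
        x = self.encoder(x)
        # Mean-pool over time, then predict the modulation scheme.
        return self.head(x.mean(dim=1))

# Example: classify a batch of 8 signals of 128 I/Q samples each.
model = IQTransformerClassifier()
logits = model(torch.randn(8, 2, 128))
print(logits.shape)  # torch.Size([8, 11])
```

Mean-pooling the encoder output is one simple way to aggregate per-timestep features into a single global representation before classification; the paper's accelerator-oriented optimizations (improved matrix multiplication, DSE) concern how such layers are mapped to FPGA hardware and are not reflected in this sketch.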
Pages: 1221-1233
Page count: 13