Efficient convolutional dual-attention transformer for automatic modulation recognition

Cited: 0
Authors
Yi, Zengrui [1 ]
Meng, Hua [1 ]
Gao, Lu [2 ]
He, Zhonghang [2 ]
Yang, Meng [1 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Math, Chengdu 611756, Sichuan, Peoples R China
[2] Natl Key Lab Sci & Technol Test Phys & Numeral Mat, Beijing, Peoples R China
Keywords
Efficient modulation recognition; Lightweight convolution; Dual-attention mechanism; Transformer
Keywords Plus: CLASSIFICATION
DOI
10.1007/s10489-024-06202-6
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Automatic modulation recognition (AMR) identifies the modulation scheme of electromagnetic signals in a noncooperative manner. Deep learning-based methods have become a major research focus in the AMR field, but such models are typically trained on standardized data and rely on substantial computational and storage resources. In real-world applications, however, the limited resources of edge devices prevent the deployment of large-scale models; in addition, traditional networks cannot handle real-world signals of varying length or with locally missing data. We therefore propose a network structure based on a convolutional Transformer with a dual-attention mechanism. The proposed structure exploits the inductive bias of lightweight convolution and the global modeling capability of the Transformer, fusing local and global features to achieve high recognition accuracy. Moreover, the model adapts to the length of the input signals while remaining robust to incomplete signals. Experimental results on the open-source datasets RML2016.10a, RML2016.10b, and RML2018.01a demonstrate that the proposed network achieves 95.05%, 94.79%, and 98.14% accuracy, respectively, with enhancement training, and maintains greater than 90% accuracy when the signals are incomplete. In addition, the proposed network has fewer parameters and a lower computational cost than benchmark methods.
Pages: 16