Robust and Efficient Modulation Recognition with Pyramid Signal Transformer

Cited by: 8
Authors
Su, He [1 ]
Fan, Xinyi [1 ]
Liu, Huajun [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing, Peoples R China
Source
2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022) | 2022
Keywords
Modulation recognition; Noise resistance learning; Dual-attention; Transformer; Classification
DOI
10.1109/GLOBECOM48099.2022.10001593
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline Classification Code
0812
Abstract
A robust and efficient pyramid signal Transformer model, called SigFormer, is proposed in this paper for automatic modulation recognition. In SigFormer, a pyramid Transformer architecture is introduced to encode the relationships among the internal features of modulated signals. Specifically, a dual-attention block composed of a self-attention layer and a scaling-attention layer is proposed for simultaneous global feature representation and noise-resistance learning on modulated signals, and small-kernel convolution layers embedded in the dual-attention block and the feed-forward block are introduced for fine-grained modulation recognition. Experiments on RML2018.01a, RML2016.10a, and RML2016.10b show that SigFormer outperforms most other deep learning models in recognition accuracy, is more parameter-efficient than most of them, and is more robust to low signal-to-noise ratio (SNR) signals.
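The abstract describes a dual-attention block that pairs a self-attention layer (global feature representation) with a scaling-attention layer (noise-resistance learning), plus small-kernel convolutions embedded in both the dual-attention and feed-forward blocks. The PyTorch-style sketch below is only one possible reading of that description, not the authors' implementation: the class name DualAttentionBlock, the sigmoid-gate form of the scaling attention, the kernel sizes, and all dimensions are illustrative assumptions.

# Hedged sketch of the dual-attention block outlined in the abstract.
# All layer choices and sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class DualAttentionBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Self-attention branch: global feature representation over the signal sequence.
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # "Scaling-attention" branch (assumed form): a channel-wise sigmoid gate that
        # rescales features, used here as a stand-in for noise-resistance learning.
        self.scale_attn = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        # Small-kernel convolution embedded in the attention block for fine-grained local features.
        self.local_conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.norm2 = nn.LayerNorm(dim)
        # Feed-forward block, also carrying a small-kernel convolution (assumed placement).
        self.ffn = nn.Sequential(
            nn.Conv1d(dim, 4 * dim, kernel_size=1),
            nn.GELU(),
            nn.Conv1d(4 * dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence_length, dim) features of an embedded I/Q signal segment.
        h = self.norm1(x)
        attn_out, _ = self.self_attn(h, h, h)
        attn_out = attn_out * self.scale_attn(h)            # dual attention: gate the global features
        local = self.local_conv(h.transpose(1, 2)).transpose(1, 2)
        x = x + attn_out + local                             # residual fusion of global and local paths
        h = self.norm2(x).transpose(1, 2)                    # (batch, dim, seq) for Conv1d
        return x + self.ffn(h).transpose(1, 2)

if __name__ == "__main__":
    block = DualAttentionBlock(dim=64, heads=4)
    iq_features = torch.randn(2, 128, 64)   # toy batch of embedded I/Q frames
    print(block(iq_features).shape)          # torch.Size([2, 128, 64])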
Pages: 1868-1874
Number of pages: 7