Sparse Adversarial Attacks against DL-Based Automatic Modulation Classification

Cited by: 0
Authors
Jiang, Zenghui [1]
Zeng, Weijun [1]
Zhou, Xingyu [1]
Feng, Peilun [1]
Chen, Pu [1]
Yin, Shenqian [1]
Han, Changzhi [1]
Li, Lin [1]
Affiliations
[1] Army Engineering University of PLA, College of Communication Engineering, Nanjing 210007, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
modulation recognition; wireless security; deep learning; neural networks; adversarial attack; adversarial examples;
DOI
10.3390/electronics12183752
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Automatic modulation recognition (AMR) is a crucial component of domains such as cognitive radio and electromagnetic countermeasures, and an important prerequisite for efficient signal processing at the receiver. Deep neural networks (DNNs), despite their effectiveness, are known to be vulnerable to adversarial attacks. This vulnerability has inspired the introduction of subtle interference into wireless communication signals, interference so small that it is difficult to discern. Such interference can mislead an eavesdropper's DNN into misrecognizing the modulation scheme, thereby camouflaging the modulation pattern of the communication signal. Nonetheless, most existing camouflage methods for electromagnetic signal modulation recognition rely on a global perturbation of the signal; they do not account for the local agility of the signal disturbance or the concealment requirements of decoy signals intercepted by an adversary. This paper presents a generator framework designed to produce perturbations with sparse properties. Furthermore, we introduce a spectral loss that minimizes the spectral difference between the adversarial perturbation and the original signal, making the perturbation harder to monitor and thereby deceiving the adversary's electromagnetic signal modulation recognition system. The experimental results validate that the proposed method significantly outperforms existing methods in terms of generation time. Moreover, it can generate adversarial signals with high deceivability and transferability even under extremely sparse conditions.
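The abstract does not give the exact formulation of the sparse perturbation generator or the spectral loss, so the following is only a minimal sketch of one plausible spectral-consistency penalty consistent with the description above; the tensor shapes, function names, and the combined objective in the comments are illustrative assumptions, not the authors' implementation.

import torch

def spectral_loss(perturbation: torch.Tensor, signal: torch.Tensor) -> torch.Tensor:
    # Hypothetical spectral-consistency penalty (not the authors' code): the mean
    # squared difference between the normalised magnitude spectra of the adversarial
    # perturbation and the original signal. Inputs are assumed to be I/Q baseband
    # frames stored as real tensors of shape (batch, 2, n_samples).
    def magnitude_spectrum(x: torch.Tensor) -> torch.Tensor:
        iq = torch.complex(x[:, 0, :], x[:, 1, :])        # (batch, n_samples) complex baseband
        spec = torch.abs(torch.fft.fft(iq, dim=-1))       # magnitude spectrum
        return spec / (spec.norm(dim=-1, keepdim=True) + 1e-12)
    return torch.mean((magnitude_spectrum(perturbation) - magnitude_spectrum(signal)) ** 2)

# A generator trained for sparse, spectrally camouflaged perturbations might combine
# this term with a misclassification loss and a sparsity penalty on a masked
# perturbation (mask * delta); the weighting below is purely illustrative:
#   total_loss = -cross_entropy(classifier(signal + mask * delta), labels) \
#                + lambda_sparse * mask.abs().sum() \
#                + lambda_spec * spectral_loss(mask * delta, signal)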
Pages: 13
Related Papers (50 in total)
  • [1] Threat of Adversarial Attacks on DL-Based IoT Device Identification. Bao, Zhida; Lin, Yun; Zhang, Sicheng; Li, Zixin; Mao, Shiwen. IEEE INTERNET OF THINGS JOURNAL, 2022, 9(11): 9012-9024.
  • [2] HFAD: Homomorphic Filtering Adversarial Defense Against Adversarial Attacks in Automatic Modulation Classification. Zhang, Sicheng; Lin, Yun; Yu, Jiarun; Zhang, Jianting; Xuan, Qi; Xu, Dongwei; Wang, Juzhen; Wang, Meiyu. IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10(03): 880-892.
  • [3] Robust Automatic Modulation Classification in the Presence of Adversarial Attacks. Sahay, Rajeev; Love, David J.; Brinton, Christopher G. 2021 55TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2021.
  • [4] Realism versus Performance for Adversarial Examples Against DL-based NIDS. Alatwi, Huda Ali; Morisset, Charles. 38TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2023, 2023: 1549-1557.
  • [5] Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks. Shtaiwi, Eyad; El Ouadrhiri, Ahmed; Moradikia, Majid; Sultana, Salma; Abdelhadi, Ahmed; Han, Zhu. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022: 1472-1477.
  • [6] Defending AI-Based Automatic Modulation Recognition Models Against Adversarial Attacks. Tang, Haolin; Catak, Ferhat Ozgur; Kuzlu, Murat; Catak, Evren; Zhao, Yanxiao. IEEE ACCESS, 2023, 11: 76629-76637.
  • [7] Frequency-Constrained Iterative Adversarial Attacks for Automatic Modulation Classification. Chen, Yigong; Qiao, Xiaoqiang; Zhang, Jiang; Zhang, Tao; Du, Yihang. IEEE COMMUNICATIONS LETTERS, 2024, 28(12): 2734-2738.
  • [8] Bias Busters: Robustifying DL-Based Lithographic Hotspot Detectors Against Backdooring Attacks. Liu, Kang; Tan, Benjamin; Reddy, Gaurav Rajavendra; Garg, Siddharth; Makris, Yiorgos; Karri, Ramesh. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2021, 40(10): 2077-2089.
  • [9] Transferable Adversarial Attacks against Automatic Modulation Classifier in Wireless Communications. Hu, Lin; Jiang, Han; Li, Wen; Han, Hao; Yang, Yang; Jiao, Yutao; Wang, Haichao; Xu, Yuhua. WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022.
  • [10] Adversarial attack on DL-based massive MIMO CSI feedback. Liu, Qing; Guo, Jiajia; Wen, Chao-Kai; Jin, Shi. JOURNAL OF COMMUNICATIONS AND NETWORKS, 2020, 22(03): 230-235.