Optimization of Communication Signal Adversarial Examples by Selectively Preserving Low-Frequency Components of Perturbations

Times Cited: 0
Authors
Zhang, Yi [1 ,2 ]
Wang, Lulu [1 ]
Wang, Xiaolei [1 ]
Shi, Dianxi [1 ]
Bai, Jiajun [1 ]
Affiliations
[1] Intelligent Game & Decis Lab, Beijing 100071, Peoples R China
[2] China Special Police Coll, Beijing 102211, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
communication signal; adversarial examples; low-frequency; perturbations;
DOI
10.3390/s24227110
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
Achieving a high attack success rate (ASR) with minimal perturbation distortion has long been a prominent and challenging topic in adversarial-example research. In this paper, a novel method for optimizing communication signal adversarial examples is proposed that focuses on the low-frequency components of perturbations (LFCP). Observations of model attention over DCT coefficients reveal that the LFCP of adversarial examples play a crucial role in altering the model's predictions. Selectively preserving the LFCP is therefore established as the core idea of the optimization strategy. Using a binary search algorithm that takes the inconsistency of the model's predictions as a constraint, the LFCP can be identified effectively, minimizing perturbation distortion while maintaining the ASR. Experimental results on a publicly available dataset with six adversarial attacks and two DNN models indicate that the proposed method not only significantly reduces perturbation distortion for FGSM, BIM, PGD, and MI-FGSM but also achieves a modest improvement in ASR. Notably, even for DeepFool and BS-FGM, which introduce small perturbations and exhibit high ASRs, the proposed method still delivers feasible performance.
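The abstract's core idea, preserving only the low-frequency DCT components of a perturbation and binary-searching for how many coefficients must be kept for the attack to still succeed, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the orthonormal DCT is implemented directly with NumPy, and the `predict` callable standing in for the DNN classifier is hypothetical.

```python
import numpy as np


def dct_matrix(n):
    # Orthonormal DCT-II transform matrix; its inverse is its transpose.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] *= 1.0 / np.sqrt(2.0)
    return M


def keep_low_freq(delta, n_keep):
    # Zero out all but the first n_keep (lowest-frequency) DCT coefficients
    # of the perturbation, then transform back to the signal domain.
    n = delta.shape[-1]
    M = dct_matrix(n)
    coeffs = M @ delta
    coeffs[n_keep:] = 0.0
    return M.T @ coeffs


def lfcp_binary_search(x, delta, predict, y_true):
    # Binary search for the smallest number of low-frequency coefficients
    # whose preservation still changes the model's prediction on x + delta.
    n = delta.shape[-1]
    lo, hi = 1, n
    best = delta  # fall back to the full perturbation
    while lo <= hi:
        mid = (lo + hi) // 2
        trimmed = keep_low_freq(delta, mid)
        if predict(x + trimmed) != y_true:  # attack still succeeds
            best, hi = trimmed, mid - 1     # try keeping even fewer
        else:
            lo = mid + 1                    # need more coefficients
    return best
```

As a toy usage example, with a classifier that thresholds the signal mean, a perturbation whose high-frequency part is irrelevant to the decision is trimmed down to its low-frequency content, reducing distortion while the misclassification persists.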
Pages: 16
Related Papers
32 in total (10 listed below)
  • [1] On the Limitations of Targeted Adversarial Evasion Attacks Against Deep Learning Enabled Modulation Recognition
    Bair, Samuel
    DelVecchio, Matthew
    Flowers, Bryse
    Michaels, Alan J.
    Headley, William C.
    [J]. PROCEEDINGS OF THE 2019 ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING (WISEML '19), 2019, : 25 - 30
  • [2] Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet
    Chen, Sizhe
    He, Zhengbao
    Sun, Chengjin
    Yang, Jie
    Huang, Xiaolin
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (04) : 2188 - 2197
  • [3] Boosting Adversarial Attacks with Momentum
    Dong, Yinpeng
    Liao, Fangzhou
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
    Hu, Xiaolin
    Li, Jianguo
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 9185 - 9193
  • [4] Robust and Generalized Physical Adversarial Attacks via Meta-GAN
    Feng, Weiwei
    Xu, Nanqing
    Zhang, Tianzhu
    Wu, Baoyuan
    Zhang, Yongdong
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 1112 - 1125
  • [5] Evaluating Adversarial Evasion Attacks in the Context of Wireless Communications
    Flowers, Bryse
    Buehrer, R. Michael
    Headley, William C.
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2020, 15 (15) : 1102 - 1113
  • [6] Goodfellow IJ, 2015, arXiv, DOI arXiv:1412.6572
  • [7] Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers
    Kim, Brian
    Sagduyu, Yalin E.
    Davaslioglu, Kemal
    Erpek, Tugba
    Ulukus, Sennur
    [J]. IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (06) : 3868 - 3880
  • [8] Kim H, 2020, arXiv
  • [9] Kurakin A, 2017, arXiv, DOI arXiv:1607.02533
  • [10] Adversarial attacks and defenses in Speaker Recognition Systems: A survey
    Lan, Jiahe
    Zhang, Rui
    Yan, Zheng
    Wang, Jie
    Chen, Yu
    Hou, Ronghui
    [J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2022, 127