Channel Aware Adversarial Attacks are Not Robust

Cited by: 1
Authors
Sinha, Sujata [1 ]
Soysal, Alkan [1 ]
Affiliations
[1] Virginia Tech, Dept Elect & Comp Engn, Wireless VT, Blacksburg, VA 24061 USA
Source
MILCOM 2023 - 2023 IEEE MILITARY COMMUNICATIONS CONFERENCE | 2023
DOI
10.1109/MILCOM58377.2023.10356294
CLC number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Adversarial Machine Learning (AML) has shown significant success when applied to deep learning models across various domains. This paper explores channel-aware adversarial attacks on DNN-based modulation classification models within wireless environments. Our investigation focuses on the robustness of these attacks with respect to channel distribution and path-loss parameters. We examine two scenarios: one in which the attacker has instantaneous channel knowledge and another in which the attacker relies on statistical channel data. In both cases, we study channels subject to Rayleigh fading alone, Rayleigh fading combined with shadowing, and Rayleigh fading combined with both shadowing and path loss. Our findings reveal that the distance between the attacker and the legitimate receiver largely dictates the success of an AML attack. Without precise distance estimation, adversarial attacks are likely to fail.
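The three channel scenarios in the abstract can be illustrated with a short simulation sketch. The snippet below models the received power gain as the product of distance-dependent path loss, log-normal shadowing, and Rayleigh fading; all parameter values (path-loss exponent, shadowing standard deviation, reference distance) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_gain(d, d0=1.0, alpha=3.0, sigma_db=8.0, n=1):
    """Composite channel power gain at distance d.

    Combines three effects the paper's scenarios build on:
      - path loss:   (d0/d)^alpha, deterministic in distance
      - shadowing:   log-normal, sigma_db standard deviation in dB
      - fading:      |h|^2 with h ~ CN(0, 1), i.e. Rayleigh fading

    Parameter values are illustrative, not from the paper.
    """
    path_loss = (d0 / d) ** alpha
    shadowing = 10.0 ** (sigma_db * rng.standard_normal(n) / 10.0)
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
    rayleigh = np.abs(h) ** 2
    return path_loss * shadowing * rayleigh

# Average gain falls sharply with distance, which is why the paper's
# conclusion hinges on the attacker estimating that distance correctly:
# an attacker with only statistical channel knowledge sees the mean,
# while instantaneous knowledge reveals the fading realization too.
near = channel_gain(2.0, n=100_000)
far = channel_gain(20.0, n=100_000)
```

With the assumed exponent `alpha = 3`, a tenfold distance error shifts the mean gain by roughly 30 dB, consistent with the abstract's claim that imprecise distance estimation dominates attack success.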
Pages: 6
Related Papers (50 results)
  • [1] Fairness-Aware Regression Robust to Adversarial Attacks
    Jin, Yulu
    Lai, Lifeng
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2023, 71 : 4092 - 4105
  • [2] Robustness-Aware Filter Pruning for Robust Neural Networks Against Adversarial Attacks
    Lim, Hyuntak
    Roh, Si-Dong
    Park, Sangki
    Chung, Ki-Seok
    2021 IEEE 31ST INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2021
  • [3] Robust Detection of Adversarial Attacks on Medical Images
    Li, Xin
    Zhu, Dongxiao
    2020 IEEE 17TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2020), 2020, : 1154 - 1158
  • [4] Robust Trajectory Prediction against Adversarial Attacks
    Cao, Yulong
    Xu, Danfei
    Weng, Xinshuo
    Mao, Z. Morley
    Anandkumar, Anima
    Xiao, Chaowei
    Pavone, Marco
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 128 - 137
  • [5] Stochastic Linear Bandits Robust to Adversarial Attacks
    Bogunovic, Ilija
    Losalka, Arpan
    Krause, Andreas
    Scarlett, Jonathan
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [6] Are Generative Classifiers More Robust to Adversarial Attacks?
    Li, Yingzhen
    Bradshaw, John
    Sharma, Yash
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [7] Feature Importance-aware Transferable Adversarial Attacks
    Wang, Zhibo
    Guo, Hengchang
    Zhang, Zhifei
    Liu, Wenxin
    Qin, Zhan
    Ren, Kui
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7619 - 7628
  • [8] Maximum Mean Discrepancy Test is Aware of Adversarial Attacks
    Gao, Ruize
    Liu, Feng
    Zhang, Jingfeng
    Han, Bo
    Liu, Tongliang
    Niu, Gang
    Sugiyama, Masashi
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [9] Channel-Aware Adversarial Attacks Against Deep Learning-Based Wireless Signal Classifiers
    Kim, Brian
    Sagduyu, Yalin E.
    Davaslioglu, Kemal
    Erpek, Tugba
    Ulukus, Sennur
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (06) : 3868 - 3880
  • [10] Robust Automatic Modulation Classification in the Presence of Adversarial Attacks
    Sahay, Rajeev
    Love, David J.
    Brinton, Christopher G.
    2021 55TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2021