Channel Aware Adversarial Attacks are Not Robust

Cited by: 1
Authors: Sinha, Sujata [1]; Soysal, Alkan [1]
Affiliation: [1] Virginia Tech, Dept Elect & Comp Engn, Wireless VT, Blacksburg, VA 24061 USA
Source: MILCOM 2023 - 2023 IEEE MILITARY COMMUNICATIONS CONFERENCE, 2023
DOI: 10.1109/MILCOM58377.2023.10356294
CLC number: TP [Automation and Computer Technology]
Discipline classification code: 0812
Abstract
Adversarial Machine Learning (AML) has shown significant success when applied to deep learning models across various domains. This paper explores channel-aware adversarial attacks on DNN-based modulation classification models within wireless environments. Our investigation focuses on the robustness of these attacks with respect to channel distribution and path-loss parameters. We examine two scenarios: one in which the attacker has instantaneous channel knowledge and another in which the attacker relies on statistical channel data. In both cases, we study channels subject to Rayleigh fading alone, Rayleigh fading combined with shadowing, and Rayleigh fading combined with both shadowing and path loss. Our findings reveal that the distance between the attacker and the legitimate receiver largely dictates the success of an AML attack. Without precise distance estimation, adversarial attacks are likely to fail.
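As a rough illustration of the distance sensitivity highlighted in the abstract, the following minimal NumPy sketch (not taken from the paper; the path-loss exponent, shadowing spread, and distances are illustrative assumptions) simulates the attacker-to-receiver channel gain under Rayleigh fading, log-normal shadowing, and power-law path loss, and shows how misjudging the receiver distance shifts the perturbation power actually seen at the legitimate receiver.

# Minimal sketch (not the authors' code). All parameter values are illustrative
# assumptions, not results from the paper.
import numpy as np

rng = np.random.default_rng(0)

def channel_gain(d, pl_exp=3.0, shadow_db=8.0, n_samples=10_000):
    """Sample |h|^2 for a Rayleigh-faded link with log-normal shadowing
    and distance-dependent power-law path loss."""
    rayleigh = (rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples)) / np.sqrt(2)
    shadowing = 10 ** (rng.normal(0.0, shadow_db, n_samples) / 10)  # log-normal, sigma in dB
    path_loss = d ** (-pl_exp)                                       # simplified power-law model
    return np.abs(rayleigh) ** 2 * shadowing * path_loss

# The attacker budgets perturbation power for an assumed distance; if the true
# attacker-to-receiver distance differs, the received perturbation power is off
# by the ratio of the two average channel gains.
for d_true, d_assumed in [(10, 10), (10, 30), (10, 100)]:
    gain_true = channel_gain(d_true).mean()
    gain_assumed = channel_gain(d_assumed).mean()
    mismatch_db = 10 * np.log10(gain_true / gain_assumed)
    print(f"true d={d_true:>3} m, assumed d={d_assumed:>3} m -> "
          f"received perturbation power mismatch: {mismatch_db:+.1f} dB")

Under these assumed parameters, a distance estimate that is off by a factor of three already shifts the received perturbation power by roughly 10 * pl_exp * log10(3) dB, which is consistent with the abstract's claim that imprecise distance estimation tends to make the attack fail.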
Pages: 6