Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification With Deep Learning

Cited by: 27
Authors
Qi, Peihan [1 ]
Jiang, Tao [2 ]
Wang, Lizhan [3 ]
Yuan, Xu [4 ]
Li, Zan [1 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
[3] Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
[4] Univ Louisiana Lafayette, Sch Comp & Informat, Lafayette, LA 70504 USA
Funding
National Natural Science Foundation of China
Keywords
Computational modeling; Modulation; Data models; Perturbation methods; Training; Security; Reliability; Adversarial examples; automatic modulation classification (AMC); black-box attack; deep learning (DL);
DOI
10.1109/TR.2022.3161138
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Advances in adversarial attack and defense technologies reinforce each other and progressively enhance the reliability of deep learning (DL) systems. Most existing adversarial attack methods rely on overly idealized assumptions, which creates the illusion that DL systems can be attacked easily and has limited their further improvement. To perform practical adversarial attacks, a detection tolerant black-box adversarial attack (DTBA) method against DL-based automatic modulation classification (AMC) is presented in this article. In the DTBA method, a local DL model is first trained as a substitute for the remote target DL model. The training dataset is generated by the attacker, labeled by the target model, and augmented by Jacobian transformation. A conventional gradient attack method is then used to generate adversarial examples against the local DL model. Moreover, before the attack is launched against the target model, the local model estimates the misclassification probability of the perturbed examples and discards invalid adversarial examples. Compared with related attack methods under different criteria on public datasets, the DTBA method reduces the attack cost while increasing the attack success rate. The adversarial transferability of the proposed method to the target model is improved by more than 20%. The DTBA method is suitable for launching flexible and effective black-box adversarial attacks against DL-based AMC systems.
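To make the workflow in the abstract concrete, the following is a minimal PyTorch sketch of a DTBA-style pipeline, not the authors' implementation: a substitute classifier is trained on attacker-generated signals labeled by the target model and augmented with a Jacobian-based step, adversarial examples are generated with a conventional gradient (FGSM) attack against the substitute, and examples the substitute fails to misclassify are discarded before any query reaches the target. The SubstituteAMC network, the query_target_labels oracle stand-in, the signal shape, and all hyperparameters are illustrative assumptions.

# Minimal sketch of a DTBA-style pipeline as described in the abstract:
# substitute training with Jacobian-based augmentation, FGSM on the
# substitute, then local pre-filtering of invalid adversarial examples.
# All names, shapes, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, SIG_LEN = 10, 128          # e.g., 10 modulation types, 128 I/Q samples

class SubstituteAMC(nn.Module):
    """Small 1-D CNN standing in for the local substitute classifier."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(2, 32, 7, padding=3)
        self.conv2 = nn.Conv1d(32, 32, 7, padding=3)
        self.fc = nn.Linear(32 * SIG_LEN, NUM_CLASSES)
    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.fc(x.flatten(1))

def query_target_labels(x):
    """Stand-in for querying the remote black-box target model for labels."""
    with torch.no_grad():
        return target(x).argmax(dim=1)

def jacobian_augment(model, x, labels, lam=0.1):
    """One round of Jacobian-based dataset augmentation: perturb each example
    along the sign of the gradient of its assigned class score."""
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, labels.view(-1, 1)).sum()
    grad, = torch.autograd.grad(score, x)
    return torch.cat([x.detach(), x.detach() + lam * grad.sign()], dim=0)

def fgsm(model, x, labels, eps=0.01):
    """Conventional gradient (FGSM) attack against the local substitute."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), labels)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def filter_valid(model, x_adv, labels):
    """Keep only examples the substitute already misclassifies; the rest are
    discarded before any attack query is sent to the target."""
    with torch.no_grad():
        fooled = model(x_adv).argmax(dim=1) != labels
    return x_adv[fooled]

if __name__ == "__main__":
    target = SubstituteAMC()                 # stand-in for the remote target model
    substitute = SubstituteAMC()
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)

    x = torch.randn(64, 2, SIG_LEN)          # attacker-generated seed signals
    for _ in range(3):                       # a few substitute-training rounds
        y = query_target_labels(x)           # label the dataset via the target
        for _ in range(20):
            opt.zero_grad()
            F.cross_entropy(substitute(x), y).backward()
            opt.step()
        x = jacobian_augment(substitute, x, query_target_labels(x))

    y = query_target_labels(x)
    x_adv = filter_valid(substitute, fgsm(substitute, x, y), y)
    print(f"{len(x_adv)}/{len(x)} adversarial examples pass the local filter")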
Pages: 674-686
Number of pages: 13