Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification With Deep Learning

Cited by: 21
Authors
Qi, Peihan [1 ]
Jiang, Tao [2 ]
Wang, Lizhan [3 ]
Yuan, Xu [4 ]
Li, Zan [1 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
[3] Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
[4] Univ Louisiana Lafayette, Sch Comp & Informat, Lafayette, LA 70504 USA
Funding
National Natural Science Foundation of China;
Keywords
Computational modeling; Modulation; Data models; Perturbation methods; Training; Security; Reliability; Adversarial examples; automatic modulation classification (AMC); black-box attack; deep learning (DL);
DOI
10.1109/TR.2022.3161138
Chinese Library Classification
TP3 [computing technology; computer technology];
Discipline code
0812 ;
Abstract
Advances in adversarial attack and defense technologies enhance the reliability of deep learning (DL) systems in a spiral fashion. Most existing adversarial attack methods rest on overly idealized assumptions, which creates the illusion that DL systems are easy to attack and has restricted further improvement of DL systems. To perform practical adversarial attacks, a detection tolerant black-box adversarial-attack (DTBA) method against DL-based automatic modulation classification (AMC) is presented in this article. In the DTBA method, a local DL model is first trained as a substitute for the remote target DL model. The training dataset is generated by the attacker, labeled by the target model, and augmented by Jacobian transformation. A conventional gradient attack method is then used to generate adversarial examples against the local model. Moreover, before launching the attack on the target model, the local model estimates the misclassification probability of the perturbed examples in advance and discards the invalid adversarial examples. Compared with related attack methods under different criteria on public datasets, the DTBA method reduces the attack cost while increasing the attack success rate. The adversarial-example transferability of the proposed method to the target model is increased by more than 20%. The DTBA method is thus suitable for launching flexible and effective black-box adversarial attacks against DL-based AMC systems.
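The abstract's generate-then-filter step can be sketched in miniature: run a conventional gradient attack (FGSM here) on the local substitute model, then keep only the perturbed examples that the substitute itself already misclassifies before querying the target. This is a hedged illustration only, not the paper's implementation: the linear softmax substitute (`W`, `b`), feature dimension, class count, and `eps` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical substitute model: a linear softmax classifier over
# 8-dimensional signal features with 4 modulation classes.
W = rng.normal(size=(4, 8))
b = np.zeros(4)

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def predict(x):
    """Substitute model's predicted class for one example."""
    return int(np.argmax(W @ x + b))

def fgsm(x, y, eps):
    """One FGSM step on the substitute's cross-entropy loss.

    For a linear softmax model, dL/dx = W^T (p - onehot(y)),
    so the attack adds eps * sign of that gradient.
    """
    p = softmax(W @ x + b)
    onehot = np.eye(W.shape[0])[y]
    grad_x = W.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

# Generate candidate adversarial examples from random inputs, then
# pre-filter: discard candidates the substitute still classifies as
# the original label (i.e., likely-invalid examples are never sent
# to the remote target model).
X = rng.normal(size=(32, 8))
labels = np.array([predict(x) for x in X])
candidates = [fgsm(x, y, eps=0.5) for x, y in zip(X, labels)]
kept = [xa for xa, y in zip(candidates, labels) if predict(xa) != y]

print(f"kept {len(kept)} of {len(candidates)} candidates")
```

In the full method the filter is probabilistic (the substitute estimates a misclassification probability), whereas this sketch uses a hard keep/discard rule for brevity; the point is that filtering happens locally, before any query to the target.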
Pages: 674-686
Page count: 13
Related Papers
50 records
  • [1] Generalizable Black-Box Adversarial Attack With Meta Learning
    Yin, Fei
    Zhang, Yong
    Wu, Baoyuan
    Feng, Yan
    Zhang, Jingyi
    Fan, Yanbo
    Yang, Yujiu
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (03) : 1804 - 1818
  • [2] A Black-Box Adversarial Attack via Deep Reinforcement Learning on the Feature Space
    Li, Lyue
    Rezapour, Amir
    Tzeng, Wen-Guey
    2021 IEEE CONFERENCE ON DEPENDABLE AND SECURE COMPUTING (DSC), 2021,
  • [3] Ensemble adversarial black-box attacks against deep learning systems
    Hang, Jie
    Han, Keji
    Chen, Hui
    Li, Yun
    PATTERN RECOGNITION, 2020, 101
  • [4] Restricted Black-Box Adversarial Attack Against DeepFake Face Swapping
    Dong, Junhao
    Wang, Yuan
    Lai, Jianhuang
    Xie, Xiaohua
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 2596 - 2608
  • [5] DeeBBAA: A Benchmark Deep Black-Box Adversarial Attack Against CyberPhysical Power Systems
    Bhattacharjee, Arnab
    Bai, Guangdong
    Tushar, Wayes
    Verma, Ashu
    Mishra, Sukumar
    Saha, Tapan K.
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24): : 40670 - 40688
  • [6] Substitute Meta-Learning for Black-Box Adversarial Attack
    Hu, Cong
    Xu, Hao-Qi
    Wu, Xiao-Jun
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2472 - 2476
  • [7] Black-box Bayesian adversarial attack with transferable priors
    Zhang, Shudong
    Gao, Haichang
    Shu, Chao
    Cao, Xiwen
    Zhou, Yunyi
    He, Jianping
    MACHINE LEARNING, 2024, 113 (04) : 1511 - 1528
  • [9] Simulator Attack+ for Black-Box Adversarial Attack
    Ji, Yimu
    Ding, Jianyu
    Chen, Zhiyu
    Wu, Fei
    Zhang, Chi
    Sun, Yiming
    Sun, Jing
    Liu, Shangdong
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 636 - 640
  • [10] Targeted Black-Box Adversarial Attack Method for Image Classification Models
    Zheng, Su
    Chen, Jialin
    Wang, Lingli
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,