Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification With Deep Learning

Cited by: 21
Authors
Qi, Peihan [1 ]
Jiang, Tao [2 ]
Wang, Lizhan [3 ]
Yuan, Xu [4 ]
Li, Zan [1 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, Xian 710071, Peoples R China
[3] Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
[4] Univ Louisiana Lafayette, Sch Comp & Informat, Lafayette, LA 70504 USA
Funding
National Natural Science Foundation of China
Keywords
Computational modeling; Modulation; Data models; Perturbation methods; Training; Security; Reliability; Adversarial examples; automatic modulation classification (AMC); black-box attack; deep learning (DL)
DOI
10.1109/TR.2022.3161138
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Advances in adversarial attack and defense technologies enhance the reliability of deep learning (DL) systems in a mutually reinforcing spiral. Most existing adversarial attack methods rest on overly idealized assumptions, which creates the illusion that DL systems can be attacked easily and has restricted further improvement of DL systems. To perform practical adversarial attacks, a detection tolerant black-box adversarial-attack (DTBA) method against DL-based automatic modulation classification (AMC) is presented in this article. In the DTBA method, a local DL model is first trained as a substitute for the remote target DL model. The training dataset is generated by the attacker, labeled by the target model, and augmented by Jacobian transformation. A conventional gradient attack method is then used to generate adversarial examples against the local DL model. Moreover, before launching the attack on the target model, the local model estimates the misclassification probability of the perturbed examples in advance and discards the invalid adversarial examples. Compared with related attack methods under different criteria on public datasets, the DTBA method reduces the attack cost while increasing the attack success rate. The adversarial transferability of the proposed method to the target model is increased by more than 20%. The DTBA method is suitable for launching flexible and effective black-box adversarial attacks against DL-based AMC systems.
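The pipeline described in the abstract (train a local substitute, craft adversarial examples with a gradient attack, and pre-filter examples the substitute does not already misclassify) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: a linear-softmax classifier stands in for both the substitute and the target model, Jacobian-based dataset augmentation is omitted, FGSM is assumed as the "conventional gradient attack," and the names `LinearSubstitute`, `fgsm`, and `dtba_filter` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class LinearSubstitute:
    """Toy stand-in for the locally trained substitute DL model."""
    def __init__(self, dim, n_classes):
        self.W = rng.normal(size=(dim, n_classes))

    def predict_proba(self, X):
        return softmax(X @ self.W)

    def predict(self, X):
        return self.predict_proba(X).argmax(axis=1)

    def grad_loss_wrt_input(self, X, y):
        # Cross-entropy gradient w.r.t. the input of a linear-softmax
        # model: dL/dx = W @ (p - onehot(y)) for each example.
        p = self.predict_proba(X)
        p[np.arange(len(y)), y] -= 1.0
        return p @ self.W.T

def fgsm(model, X, y, eps):
    """One-step gradient attack: move each input in the sign
    direction that increases the substitute's loss."""
    return X + eps * np.sign(model.grad_loss_wrt_input(X, y))

def dtba_filter(model, X_adv, y_true):
    """Local pre-check: keep only perturbed examples that the
    substitute already misclassifies before querying the target."""
    keep = model.predict(X_adv) != y_true
    return X_adv[keep], keep

# Demo: random "signal features" labeled by the substitute (which
# here also plays the role of the labeling oracle).
dim, n_classes, n = 16, 4, 200
X = rng.normal(size=(n, dim))
sub = LinearSubstitute(dim, n_classes)
y = sub.predict(X)

X_adv = fgsm(sub, X, y, eps=0.5)
X_keep, mask = dtba_filter(sub, X_adv, y)
print(f"{int(mask.sum())} of {n} adversarial examples pass the local pre-check")
```

Only the examples surviving `dtba_filter` would be sent to the remote target, which is what lets the method cut query cost while raising the success rate of the attack.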
Pages: 674-686 (13 pages)
Related Papers (items [41]-[50] of 50)
  • [41] Xu, Li; Yang, Zejin; Guo, Huiting; Wan, Xu; Fan, Chunlong. Local Black-box Adversarial Attack based on Random Segmentation Channel. Proceedings of the 2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2024), 2024: 1437-1442.
  • [42] Wang, Yajie; Tan, Yu-an; Zhang, Wenjiao; Zhao, Yuhang; Kuang, Xiaohui. An adversarial attack on DNN-based black-box object detectors. Journal of Network and Computer Applications, 2020, 161.
  • [43] Lei, Hong; Jiang, Wei; Zhan, Jinyu; You, Shen; Jin, Lingxin; Xie, Xiaona; Chang, Zhengwei. Iterative Training Attack: A Black-Box Adversarial Attack via Perturbation Generative Network. Journal of Circuits, Systems and Computers, 2023, 32(18).
  • [44] Zhu, Jiaqi; Dai, Feng; Yu, Lingyun; Xie, Hongtao; Wang, Lidong; Wu, Bo; Zhang, Yongdong. Attention-guided transformation-invariant attack for black-box adversarial examples. International Journal of Intelligent Systems, 2022, 37(5): 3142-3165.
  • [45] Wei, Xingxing; Yan, Huanqian; Li, Bo. Sparse Black-Box Video Attack with Reinforcement Learning. International Journal of Computer Vision, 2022, 130: 1459-1473.
  • [47] Zhou, Yutian; Tan, Yu-an; Zhang, Quanxin; Kuang, Xiaohui; Han, Yahong; Hu, Jingjing. An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers. Mobile Networks & Applications, 2021, 26(4): 1616-1629.
  • [48] Liu, Renyang; Zhou, Wei; Zhang, Tianwei; Chen, Kangjie; Zhao, Jun; Lam, Kwok-Yan. Boosting Black-Box Attack to Deep Neural Networks With Conditional Diffusion Models. IEEE Transactions on Information Forensics and Security, 2024, 19: 5207-5219.
  • [50] Pan, Xiaoqin; Zhu, Zuqing. Self-Taught Black-Box Adversarial Attack to Multilayer Network Automation. 2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022: 2134-2139.