Countermeasures Against Adversarial Examples in Radio Signal Classification

Cited by: 22
Authors
Zhang, Lu [1 ]
Lambotharan, Sangarapillai [1 ]
Zheng, Gan [1 ]
AsSadhan, Basil [2 ]
Roli, Fabio [3 ]
Institutions
[1] Loughborough Univ, Wolfson Sch Mech Elect & Mfg Engn, Loughborough LE11 3TU, Leics, England
[2] King Saud Univ, Dept Comp Sci, Riyadh 11421, Saudi Arabia
[3] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Modulation; Perturbation methods; Receivers; Training; Smoothing methods; Radio transmitters; Noise measurement; Deep learning; adversarial examples; radio modulation classification; neural rejection; label smoothing;
DOI
10.1109/LWC.2021.3083099
Chinese Library Classification
TP [Automation technology; Computer technology];
Discipline Classification Code
0812;
Abstract
Deep learning algorithms have been shown to be powerful in many communication network design problems, including automatic modulation classification. However, they are vulnerable to carefully crafted attacks called adversarial examples. Hence, the reliance of wireless networks on deep learning algorithms poses a serious threat to their security and operation. In this letter, we propose for the first time a countermeasure against adversarial examples in modulation classification. Our countermeasure is based on a neural rejection technique, augmented by label smoothing and Gaussian noise injection, which detects and rejects adversarial examples with high accuracy. Our results demonstrate that the proposed countermeasure can protect deep-learning-based modulation classification systems against adversarial examples.
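The abstract names three ingredients: label smoothing of training targets, Gaussian noise injection on the I/Q samples, and a neural rejection rule that refuses low-confidence inputs. The NumPy sketch below illustrates each in isolation; the function names, the SNR-based noise parameterization, and the 0.9 rejection threshold are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: soften one-hot targets toward the uniform
    distribution, which discourages over-confident logits."""
    k = one_hot.shape[-1]  # number of modulation classes
    return one_hot * (1.0 - eps) + eps / k

def inject_gaussian_noise(iq, snr_db=20.0, rng=None):
    """Gaussian noise injection: add white noise to I/Q samples at a
    chosen SNR (in dB) as a training-time augmentation. The SNR
    parameterization is an assumption for this sketch."""
    rng = np.random.default_rng(rng)
    sig_power = np.mean(iq ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return iq + rng.normal(0.0, np.sqrt(noise_power), size=iq.shape)

def reject(probs, threshold=0.9):
    """Neural rejection: flag any input whose maximum class score falls
    below the threshold instead of forcing a classification."""
    return np.max(probs, axis=-1) < threshold
```

For example, smoothing a 4-class one-hot target with `eps=0.1` yields `[0.925, 0.025, 0.025, 0.025]`, and a softmax output of `[0.6, 0.4]` would be rejected at the 0.9 threshold while `[0.95, 0.05]` would be accepted.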
Pages: 1830-1834
Page count: 5