A Hybrid Training-Time and Run-Time Defense Against Adversarial Attacks in Modulation Classification

Cited by: 10
Authors
Zhang, Lu [1 ,2 ]
Lambotharan, Sangarapillai [2 ]
Zheng, Gan [2 ]
Liao, Guisheng [1 ]
Demontis, Ambra [3 ]
Roli, Fabio [4 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Peoples R China
[2] Loughborough Univ, Wolfson Sch Mech Elect & Mfg Engn, Loughborough LE11 3TU, Leics, England
[3] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
[4] Univ Genoa, Dept Informat Bioengn Robot & Syst Engn, I-16145 Genoa, Italy
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Training; Modulation; Perturbation methods; Smoothing methods; Support vector machines; Convolutional neural networks; Deep learning; DNNs; adversarial examples; projected gradient descent algorithm; adversarial training; label smoothing; neural rejection
DOI
10.1109/LWC.2022.3159659
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Motivated by the superior performance of deep learning in applications such as computer vision and natural language processing, several recent studies have applied deep neural networks to the design of future generations of wireless networks. However, recent work has shown that imperceptible, carefully crafted adversarial examples (attacks) can significantly degrade classification accuracy. In this letter, we investigate a defense mechanism combining training-time and run-time techniques to protect machine learning-based radio signal (modulation) classification against adversarial attacks. The training-time defense consists of adversarial training and label smoothing, while the run-time defense employs a support vector machine-based neural rejection (NR). Considering a white-box scenario and real datasets, we demonstrate that the proposed techniques outperform existing state-of-the-art defenses.
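The abstract names three building blocks: label smoothing and PGD-based adversarial training at training time, and score-thresholded neural rejection at run time. The sketch below is a minimal NumPy illustration of each idea, not the authors' implementation: the linear-softmax surrogate classifier, the rejection threshold, and all hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions.

```python
import numpy as np

def smooth_labels(y_onehot, eps=0.1):
    """Label smoothing: soften one-hot targets toward the uniform distribution."""
    k = y_onehot.shape[-1]
    return y_onehot * (1.0 - eps) + eps / k

def pgd_attack(x, y, W, b, eps=0.3, alpha=0.05, steps=10):
    """L_inf projected gradient descent against a (hypothetical) softmax-linear
    classifier with weights W (d x k) and bias b; the input gradient of the
    cross-entropy loss is derived by hand as W @ (p - y)."""
    x_adv = x.copy()
    for _ in range(steps):
        logits = x_adv @ W + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad = W @ (p - y)                        # dCE/dx for softmax-linear model
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

def neural_reject(scores, threshold=0.0):
    """Run-time rejection: return the top class, or -1 (abstain) if the
    maximum score falls below the rejection threshold."""
    return int(np.argmax(scores)) if np.max(scores) >= threshold else -1
```

In adversarial training, samples produced by `pgd_attack` (against the current model) are fed back into the training set with their clean, smoothed labels; at inference, `neural_reject` abstains on low-confidence inputs, which adversarial perturbations tend to produce.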
Pages: 1161 - 1165
Page count: 5