Countermeasures Against Adversarial Examples in Radio Signal Classification

Cited by: 22
Authors
Zhang, Lu [1 ]
Lambotharan, Sangarapillai [1 ]
Zheng, Gan [1 ]
AsSadhan, Basil [2 ]
Roli, Fabio [3 ]
Affiliations
[1] Loughborough Univ, Wolfson Sch Mech Elect & Mfg Engn, Loughborough LE11 3TU, Leics, England
[2] King Saud Univ, Dept Comp Sci, Riyadh 11421, Saudi Arabia
[3] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
Modulation; Perturbation methods; Receivers; Training; Smoothing methods; Radio transmitters; Noise measurement; Deep learning; adversarial examples; radio modulation classification; neural rejection; label smoothing;
DOI
10.1109/LWC.2021.3083099
Chinese Library Classification
TP [automation technology; computer technology]
Discipline classification code
0812
Abstract
Deep learning algorithms have proven powerful in many communication network design problems, including automatic modulation classification. However, they are vulnerable to carefully crafted attacks known as adversarial examples. Hence, the reliance of wireless networks on deep learning algorithms poses a serious threat to their security and operation. In this letter, we propose for the first time a countermeasure against adversarial examples in modulation classification. Our countermeasure is based on a neural rejection technique, augmented by label smoothing and Gaussian noise injection, which enables the detection and rejection of adversarial examples with high accuracy. Our results demonstrate that the proposed countermeasure can protect deep-learning-based modulation classification systems against adversarial examples.
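The abstract names three ingredients: label smoothing of training targets, Gaussian noise injection into training samples, and a neural rejection stage that refuses suspicious inputs. A minimal NumPy sketch of these ingredients, assuming a simplified max-softmax rejection rule and illustrative hyperparameters (`eps`, `sigma`, and the 0.9 threshold are assumptions; the letter's actual rejection mechanism and values may differ):

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Label smoothing: soften one-hot training targets so the
    classifier is less over-confident, which aids rejection."""
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + eps / num_classes

def inject_gaussian_noise(signals, sigma=0.01, rng=None):
    """Gaussian noise injection: augment I/Q training samples with
    zero-mean noise as a training-time robustness measure."""
    rng = np.random.default_rng(0) if rng is None else rng
    return signals + rng.normal(0.0, sigma, size=signals.shape)

def classify_with_rejection(softmax_probs, threshold=0.9):
    """Simplified neural rejection: inputs whose top-class confidence
    falls below the threshold are rejected (marked -1) rather than
    assigned a modulation class."""
    confidence = softmax_probs.max(axis=-1)
    predictions = softmax_probs.argmax(axis=-1)
    return np.where(confidence >= threshold, predictions, -1)
```

For example, `classify_with_rejection(np.array([[0.95, 0.05], [0.6, 0.4]]))` accepts the first input (class 0) and rejects the second, since its top confidence of 0.6 falls below the 0.9 threshold.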
Pages: 1830-1834
Page count: 5