The technology of adversarial attacks in signal recognition

Cited by: 6
Authors
Zhao, Haojun [1 ]
Tian, Qiao [1 ,2 ]
Pan, Lei [3 ]
Lin, Yun [1 ]
Affiliations
[1] Harbin Engn Univ, Coll Informat & Commun Engn, Harbin, Peoples R China
[2] Harbin Engn Univ, Coll Comp Sci & Technol, Harbin, Peoples R China
[3] Harbin Qianfan Technol Co LTD, Harbin, Peoples R China
Keywords
Adversarial attack; Signal recognition; Deep learning; Wireless security; AUTOMATIC MODULATION CLASSIFICATION; DEEP; IDENTIFICATION;
DOI
10.1016/j.phycom.2020.101199
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
The wide application of contour stellar images has allowed researchers to transform signal classification problems into image classification problems, enabling deep-learning-based signal recognition. However, deep neural networks (DNNs) are highly vulnerable to adversarial examples, so evaluating adversarial attack performance only on a signal-sequence recognition model cannot meet current security requirements. From the attacker's perspective, this study converts individual signals into contour stellar images and then generates adversarial examples to evaluate the impact of adversarial attacks. The results show that whether the input sample is a signal sequence or a converted image, the DNN is vulnerable to the threat of adversarial examples. Among the selected methods, across different perturbation levels and signal-to-noise ratios (SNRs), the momentum iterative method performs best; under a perturbation of 0.01, its attack performance is more than 10% higher than that of the fast gradient sign method. In addition, to measure the invisibility of the adversarial examples, the contour stellar images before and after the attack were compared so as to maintain a balance between attack success rate and attack concealment. (c) 2020 Elsevier B.V. All rights reserved.
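The abstract contrasts the single-step fast gradient sign method (FGSM) with the momentum iterative method (MIM) of Dong et al. (2018). The following is a minimal sketch of both attacks, assuming a toy linear softmax classifier with an analytically computable input gradient in place of the paper's DNN on contour stellar images; all function and variable names here are illustrative, not from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce(W, x, y):
    # Cross-entropy loss of class y under the linear model logits = W @ x.
    return -np.log(softmax(W @ x)[y])

def input_grad(W, x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x:
    # W.T @ (softmax(W @ x) - onehot(y)).
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def fgsm(W, x, y, eps):
    # Fast gradient sign method: a single signed-gradient step of size eps.
    return x + eps * np.sign(input_grad(W, x, y))

def mim(W, x, y, eps, steps=10, mu=1.0):
    # Momentum iterative method: accumulate an L1-normalized, decayed
    # gradient, take sign steps, and stay inside the L-infinity eps-ball.
    alpha = eps / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = input_grad(W, x_adv, y)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Both attacks maximize the classifier's loss under the same perturbation budget; the momentum term stabilizes the update direction across iterations, which is the property the paper credits for MIM's higher attack success rate.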
Pages: 9