Targeted Adversarial Examples Against RF Deep Classifiers

Cited by: 31
Authors
Kokalj-Filipovic, Silvija [1 ]
Miller, Rob [1 ]
Morman, Joshua [1 ]
Affiliations
[1] Perspecta Labs Inc, Basking Ridge, NJ 07920 USA
Source
PROCEEDINGS OF THE 2019 ACM WORKSHOP ON WIRELESS SECURITY AND MACHINE LEARNING (WISEML '19) | 2019
Keywords
RF machine learning; RFML; neural networks; deep learning; adversarial attack to RFML; wireless spectrum sensing; ModRec attack; radio frequency adversarial examples; RF AdExs; wireless protocol classification;
DOI
10.1145/3324921.3328792
CLC Classification Number
TP301 [Theory, Methods];
Discipline Classification Code
081202;
Abstract
Adversarial examples (AdExs) in machine learning for classification of radio frequency (RF) signals can be created in a targeted manner such that they go beyond general misclassification and result in the detection of a specific targeted class. Moreover, these drastic, targeted misclassifications can be achieved with minimal waveform perturbations, resulting in catastrophic impact on deep-learning-based spectrum-sensing applications (e.g., WiFi is mistaken for Bluetooth). This work addresses targeted deep learning AdExs, specifically those obtained using the Carlini-Wagner algorithm, and analyzes previously introduced defense mechanisms that performed successfully against non-targeted FGSM-based attacks. To analyze the effects of the Carlini-Wagner attack and the defense mechanisms, we trained neural networks on two datasets. The first dataset is a subset of the DeepSig dataset, comprising three synthetic modulations (BPSK, QPSK, 8-PSK), which we use to train a simple network for Modulation Recognition. The second dataset contains real-world, well-labeled, curated data from the 2.4 GHz Industrial, Scientific and Medical (ISM) band, which we use to train a network for wireless technology (protocol) classification using three classes: WiFi 802.11n, Bluetooth (BT), and ZigBee. We show that for attacks of limited intensity the impact of the attack, in terms of percentage of misclassifications, is similar for both datasets, and that the proposed defense is effective in both cases. Finally, we use our ISM data to show that the targeted attack is effective against the deep learning classifier but not against a classical demodulator.
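To make the attack families concrete: the abstract contrasts the Carlini-Wagner targeted attack with the non-targeted FGSM attacks the defenses were originally evaluated against. Below is a minimal, hedged sketch of a *targeted* FGSM-style step (not the authors' Carlini-Wagner implementation), assuming a toy linear softmax classifier in place of a real RF deep network; the function name and toy weights are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_fgsm(x, W, target, eps):
    """One targeted FGSM step on a linear softmax classifier.

    For cross-entropy loss -log p[target] with logits z = W @ x, the
    input gradient is W.T @ (softmax(z) - onehot(target)); stepping
    *against* it pushes the prediction toward `target` while keeping
    the perturbation bounded by eps in the L-infinity norm.
    """
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[target] = 1.0
    grad = W.T @ (p - onehot)          # d(loss)/dx
    return x - eps * np.sign(grad)     # small, bounded waveform perturbation

# Toy 2-class, 2-feature "classifier" (identity weights, for clarity only)
W = np.eye(2)
x = np.array([1.0, 0.0])               # clean input, classified as class 0
x_adv = targeted_fgsm(x, W, target=1, eps=0.6)

print(np.argmax(W @ x))       # 0  (clean prediction)
print(np.argmax(W @ x_adv))   # 1  (targeted misclassification)
```

The same gradient-sign principle underlies the targeted setting the paper studies: a perturbation small in amplitude relative to the waveform can still flip the classifier to a chosen class, while a classical (non-learned) demodulator is largely unaffected.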
Pages: 6 / 11
Page count: 6