Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks

Cited by: 4
Authors
Sivamani, Kirthi Shankar [1 ]
Sahay, Rajeev [1 ]
Gamal, Aly El [1 ]
Affiliations
[1] Department of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, United States
Source
IEEE Letters of the Computer Society | 2020, Vol. 3, Issue 01
Keywords
Economic and social effects; Deep learning
DOI
10.1109/LOCS.2020.2990897
Abstract
Deep learning models are known to be vulnerable to specifically crafted adversarial inputs that are quasi-imperceptible to humans. We propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors (observer networks) that take inputs from the hidden layers of the original network (convolutional kernel outputs) and classify the input as clean or adversarial. During inference, the detectors are treated as part of an ensemble network, and the input is deemed adversarial if at least half of the detectors classify it as such. The proposed method addresses the trade-off between classification accuracy on clean and adversarial samples, as the original classification network is not modified during the detection process. The use of multiple observer networks makes attacking the detection mechanism non-trivial even when the attacker is aware of the victim classifier. We achieve a 99.5 percent detection accuracy on the MNIST dataset and 97.5 percent on the CIFAR-10 dataset using the Fast Gradient Sign Attack in a semi-white-box setup. The number of false positive detections is a mere 0.12 percent in the worst-case scenario. © 2018 IEEE.
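The ensemble decision rule described in the abstract (flag the input as adversarial when at least half of the observer networks classify it as adversarial) can be sketched as follows. This is a minimal illustration only: the observer networks themselves are stubbed out as precomputed boolean verdicts, and the function name and tie-breaking behavior are assumptions, not taken from the paper.

```python
def majority_vote_adversarial(observer_verdicts):
    """Return True (adversarial) if at least half of the observers flag the input.

    observer_verdicts: list of bools, one per observer network, where True
    means that observer classified the input as adversarial. In the paper's
    setup, each observer is a binary classifier fed by one hidden layer
    (convolutional kernel outputs) of the main network; those internals are
    not modeled here.
    """
    flags = sum(observer_verdicts)
    return flags >= len(observer_verdicts) / 2


# Example with three hypothetical observers attached to three hidden layers:
print(majority_vote_adversarial([True, True, False]))   # 2 of 3 flagged -> True
print(majority_vote_adversarial([False, False, True]))  # 1 of 3 flagged -> False
```

Because the main classifier's weights are never modified, this detection stage can be bolted on after training without degrading clean-sample accuracy, which is the trade-off the abstract highlights.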
Pages: 25-28