Ally patches for spoliation of adversarial patches

Cited by: 3
Authors
Abdel-Hakim, Alaa E. [1,2]
Affiliations
[1] Assiut Univ, Elect Engn Dept, Assiut 71516, Egypt
[2] Umm Al Qura Univ, Comp Sci Dept, Jamoum, Saudi Arabia
Keywords
Adversarial patches; Ally patches; CNN; Deep neural networks
DOI
10.1186/s40537-019-0213-4
CLC Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Adversarial attacks represent a serious, evolving threat to the operation of deep neural networks. Recently, adversarial algorithms have been developed that make it easy for ordinary attackers to induce hallucination in deep neural networks. State-of-the-art algorithms can generate printable adversarial patches offline; these patches can then be interspersed within the field of view of a capturing camera through an innocuous, unnoticeable action. In this paper, we propose an algorithm that ravages the operation of such adversarial patches. The proposed algorithm uses the intrinsic information content of the input image to extract a set of ally patches. The extracted patches break the salience of the attacking adversarial patch to the network. To our knowledge, this is the first work to defend against this kind of adversarial attack by counter-processing the input image in order to ravage the effect of any adversarial patches it may contain. The classification decision is taken according to a late-fusion strategy applied to the independent classifications generated by the extracted patch alliance. Evaluation experiments were conducted on the 1000 classes of the ILSVRC benchmark, using different convolutional neural network models and adversarial patches of varying scale. The results show that the proposed ally patches are effective in reducing the success rates of adversarial patches.
Pages: 14
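
The abstract describes a three-step pipeline: extract ally patches from the input image, classify each patch independently, and combine the per-patch decisions with a late-fusion rule. Below is a minimal Python/PyTorch sketch of that pipeline. The random-crop extraction, the mean-softmax fusion rule, and all function names (extract_ally_patches, classify_with_ally_patches) are illustrative assumptions standing in for the paper's information-content-driven method, not the authors' exact algorithm.

# Minimal sketch of the ally-patch late-fusion idea (assumptions noted above).
import torch
import torchvision.models as models
import torchvision.transforms.functional as F

def extract_ally_patches(image, num_patches=8, patch_size=112):
    """Crop candidate ally patches from the input image.

    Assumption: random crops stand in for the paper's
    information-content-driven patch selection.
    """
    _, h, w = image.shape
    patches = []
    for _ in range(num_patches):
        top = torch.randint(0, h - patch_size + 1, (1,)).item()
        left = torch.randint(0, w - patch_size + 1, (1,)).item()
        patch = image[:, top:top + patch_size, left:left + patch_size]
        # Resize each patch back to the network's expected input size.
        patches.append(F.resize(patch, [224, 224], antialias=True))
    return torch.stack(patches)

def classify_with_ally_patches(model, image):
    """Classify each ally patch independently, then late-fuse by
    averaging the softmax scores (a simple sum-rule fusion; the paper
    may use a different fusion strategy)."""
    model.eval()
    with torch.no_grad():
        logits = model(extract_ally_patches(image))       # (N, 1000)
        probs = torch.softmax(logits, dim=1).mean(dim=0)  # fuse patch votes
    return probs.argmax().item()

if __name__ == "__main__":
    # ImageNet-pretrained CNN, matching the ILSVRC evaluation setting.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    image = torch.rand(3, 224, 224)  # stand-in for a preprocessed input image
    print(classify_with_ally_patches(model, image))

Because each ally patch is classified on its own, an adversarial patch can dominate at most the few crops that contain it; the fusion step dilutes its influence across the alliance.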