Perturbation Initialization, Adam-Nesterov and Quasi-Hyperbolic Momentum for Adversarial Examples

Cited by: 0
Authors
Zou J.-H. [1 ]
Duan Y.-X. [1 ,2 ]
Ren C.-L. [3 ]
Qiu J.-Y. [4 ]
Zhou X.-Y. [1 ]
Pan Z.-S. [1 ]
Affiliations
[1] Command and Control Engineering College, Army Engineering University of PLA, Nanjing
[2] Zhenjiang Campus, Army Military Transportation University of PLA, Zhenjiang
[3] North China Institute of Computer Technology, Beijing
[4] Mathematical Engineering and Advanced Computing, Jiangnan Institute of Computing Technology, Wuxi
Keywords
Adam-Nesterov method; Adversarial examples; Perturbation initialization; Quasi-hyperbolic momentum method; Transferability
DOI
10.12263/DZXB.20200839
Abstract
Deep neural networks (DNNs) have achieved great breakthroughs in many pattern recognition tasks. However, related research shows that DNNs are vulnerable to adversarial examples. In this paper, we study the transferability of adversarial examples in the classification task and propose a perturbation initialization method, the quasi-hyperbolic momentum iterative fast gradient sign method (QHMI-FGSM), and the Adam-Nesterov iterative fast gradient sign method (ANI-FGSM). The perturbation initialization method for adversarial attacks is called pixel shift. Furthermore, QHMI-FGSM and ANI-FGSM improve upon the existing momentum iterative fast gradient sign method (MI-FGSM) and Nesterov iterative fast gradient sign method (NI-FGSM). Additionally, perturbation initialization, QHMI-FGSM, and ANI-FGSM are easily integrated into other existing methods, which can significantly improve the success rates of black-box attacks without additional running time or computing resources. Experimental results show that our best attack, ANI-TI-DIQHM*, fools six classic black-box defense models with an average success rate of 88.68% and four advanced black-box defense models with an average success rate of 82.77%, both higher than the state-of-the-art results. © 2022, Chinese Institute of Electronics. All rights reserved.
Pages: 207-216
Page count: 9
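
Note: the abstract above names momentum-based iterative gradient attacks (MI-FGSM and the proposed QHM and Adam-Nesterov variants). For orientation only, the following is a minimal PyTorch sketch of what a quasi-hyperbolic-momentum iterative FGSM update can look like, assembled from the publicly known MI-FGSM update and the quasi-hyperbolic momentum rule. The function name qhm_ifgsm, the hyperparameter values, and the delta_init hook (standing in for a perturbation initialization such as the paper's pixel shift) are illustrative assumptions, not the paper's exact QHMI-FGSM.

# Hypothetical sketch: quasi-hyperbolic momentum combined with iterative FGSM.
# Not the paper's exact QHMI-FGSM; step sizes and mixing weights are assumptions.
import torch
import torch.nn.functional as F


def qhm_ifgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0, nu=0.7, delta_init=None):
    """Craft L_inf-bounded adversarial examples with a QHM-style update.

    model      : classifier returning logits for inputs in [0, 1]
    x, y       : clean images (N, C, H, W) and their true labels (N,)
    eps        : L_inf perturbation budget
    steps      : number of attack iterations
    mu         : momentum decay factor (as in MI-FGSM)
    nu         : quasi-hyperbolic mixing weight between the current gradient
                 and the momentum buffer
    delta_init : optional initial perturbation (e.g. a pixel-shift-style
                 initialization); zeros if not given
    """
    alpha = eps / steps                                  # per-step size
    delta = torch.zeros_like(x) if delta_init is None else delta_init.clone()
    delta = delta.clamp(-eps, eps)
    g = torch.zeros_like(x)                              # momentum buffer

    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Normalize the gradient by its mean absolute value (the L1-style
        # normalization used in common MI-FGSM implementations), then
        # accumulate it into the momentum buffer.
        grad_norm = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad_norm

        # Quasi-hyperbolic step: mix the immediate gradient with the buffer.
        update = (1 - nu) * grad_norm + nu * g
        delta = (delta + alpha * update.sign()).clamp(-eps, eps)

    return (x + delta).clamp(0, 1).detach()

In this sketch, setting nu = 1 recovers the plain MI-FGSM update, while smaller nu weights the current gradient more heavily, which is the intuition behind quasi-hyperbolic momentum.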