Boosting Adversarial Attacks with Momentum

Cited by: 1852
Authors
Dong, Yinpeng [1 ,2 ,3 ]
Liao, Fangzhou [1 ,2 ,3 ]
Pang, Tianyu [1 ,2 ,3 ]
Su, Hang [1 ,2 ,3 ]
Zhu, Jun [1 ,2 ,3 ]
Hu, Xiaolin [1 ,2 ,3 ]
Li, Jianguo [4 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Lab Brain & Intelligence, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[2] Beijing Natl Res Ctr Informat Sci & Technol, BNRist Lab, Beijing, Peoples R China
[3] Tsinghua Univ, Beijing 100084, Peoples R China
[4] Intel Labs China, Beijing, Peoples R China
Source
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2018
Funding
Beijing Natural Science Foundation;
Keywords
DOI
10.1109/CVPR.2018.00957
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Deep neural networks are vulnerable to adversarial examples, which raises security concerns about these algorithms because of the potentially severe consequences. Adversarial attacks serve as an important surrogate for evaluating the robustness of deep learning models before they are deployed. However, most existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating a momentum term into the iterative attack process, our methods stabilize update directions and escape poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates of black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that adversarially trained models with strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won first place in both the NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.
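The update described in the abstract (accumulate a normalized gradient into a momentum term, then take a sign step) can be illustrated with a minimal sketch. The following is an assumed, non-targeted PyTorch implementation for an image classifier with inputs in [0, 1]; the function name, step budget, and default hyperparameters are illustrative choices, not the paper's exact configuration.

import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, x, y, eps=16/255, steps=10, mu=1.0):
    # Sketch of a momentum iterative attack (non-targeted): accumulate the
    # L1-normalized gradient into a velocity term g, then take a sign step
    # and project the result back into the eps-ball around the clean input.
    alpha = eps / steps                      # per-step size
    g = torch.zeros_like(x)                  # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the current gradient, then decay-and-accumulate with factor mu.
        norm = grad.abs().sum(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad / norm
        x_adv = (x_adv + alpha * g.sign()).detach()
        # Keep the perturbation within the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv

For the ensemble variant mentioned in the abstract, one way to realize it under the same loop would be to fuse the outputs of several models (for example, by averaging their logits) before computing the loss.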
Pages: 9185-9193
Number of pages: 9