Adversarial Attacks on Deep Learning Models of Computer Vision: A Survey

Cited by: 8
Authors
Ding, Jia [1 ]
Xu, Zhiwu [1 ]
Affiliations
[1] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen, Peoples R China
Source
ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2020, PT III | 2020 / Volume 12454
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Adversarial attacks; Black-box attack; White-box attack; Machine learning; ROBUSTNESS;
DOI
10.1007/978-3-030-60248-2_27
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Deep learning plays a significant role in academic and commercial fields. However, deep neural networks are vulnerable to adversarial attacks, which limits their application in safety-critical areas such as autonomous driving, surveillance, drones, and robotics. Due to the rapid development of adversarial examples in computer vision, many novel and interesting adversarial attacks are not covered by existing surveys and cannot be categorized under existing taxonomies. In this paper, we present an improved taxonomy for adversarial attacks that subsumes existing taxonomies, and we comprehensively investigate and summarize the latest attacks in computer vision with respect to this improved taxonomy. Finally, we also discuss some potential research directions.
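As an illustration of the white-box attacks covered by surveys of this kind, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The model, input range, and epsilon value are assumptions made for the example; this is not the taxonomy or code of the surveyed paper.

# Minimal FGSM sketch (white-box attack), assuming a pretrained PyTorch
# classifier `model`, an input batch `x` with values in [0, 1], and
# true labels `y`. Illustrative only.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    # Track gradients with respect to the input image.
    x_adv = x.clone().detach().requires_grad_(True)
    # White-box access: compute the classification loss through the model.
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step to increase the loss, then clip back to the
    # valid image range so the perturbation stays bounded.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()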
Pages: 396-408
Page count: 13