A survey of robust adversarial training in pattern recognition: Fundamental, theory, and methodologies

Cited by: 45
Authors
Qian, Zhuang [1 ]
Huang, Kaizhu [2 ]
Wang, Qiu-Feng [1 ]
Zhang, Xu-Yao [3 ,4 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch Adv Technol, Suzhou, Peoples R China
[2] Duke Kunshan Univ, Data Sci Res Ctr, Suzhou, Peoples R China
[3] Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Adversarial examples; Adversarial training; Robust learning;
DOI
10.1016/j.patcog.2022.108889
CLC classification number
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks have achieved remarkable success in machine learning, computer vision, and pattern recognition over the last few decades. Recent studies, however, show that neural networks (both shallow and deep) can be easily fooled by certain imperceptibly perturbed input samples called adversarial examples. This security vulnerability has prompted a large body of research in recent years, since the widespread deployment of neural networks means such attacks pose real-world threats. To address robustness to adversarial examples, particularly in pattern recognition, robust adversarial training has become one of the mainstream defenses, and a variety of ideas, methods, and applications have flourished in the field. Yet a deep understanding of adversarial training, including its characteristics, interpretations, theories, and connections among different models, has remained elusive. This paper presents a comprehensive survey offering a systematic and structured investigation of robust adversarial training in pattern recognition. We start with fundamentals, including the definition, notation, and properties of adversarial examples. We then introduce a general theoretical framework with gradient regularization for defending against adversarial samples, namely robust adversarial training, together with visualizations and interpretations of why adversarial training leads to model robustness. Connections are also established between adversarial training and other traditional learning theories. After that, we summarize, review, and discuss various methodologies and defense/training algorithms in a structured way. Finally, we present analysis, outlook, and remarks on adversarial training. (C) 2022 Elsevier Ltd. All rights reserved.
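For concreteness, adversarial training is commonly written as the min-max problem below (in the style of Madry et al.); the notation here is a generic sketch for illustration and is not taken verbatim from the survey.

\[
\min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\max_{\|\delta\|_{p}\le\epsilon}\ \mathcal{L}\big(f_{\theta}(x+\delta),\,y\big)\Big]
\]

A first-order Taylor expansion of the inner maximization illustrates the gradient-regularization view mentioned in the abstract: by Hölder's inequality,

\[
\max_{\|\delta\|_{p}\le\epsilon}\ \mathcal{L}\big(f_{\theta}(x+\delta),\,y\big)\;\approx\;\mathcal{L}\big(f_{\theta}(x),\,y\big)+\epsilon\,\big\|\nabla_{x}\mathcal{L}\big(f_{\theta}(x),\,y\big)\big\|_{q},
\]

where \(\|\cdot\|_{q}\) is the dual norm of \(\|\cdot\|_{p}\) (with \(1/p+1/q=1\)), so minimizing the adversarial loss approximately amounts to adding an input-gradient penalty to the standard training objective.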
Pages: 11