Adversarial Attacks and Defenses: Frontiers, Advances and Practice

Times Cited: 0
Authors
Xu, Han [1 ]
Li, Yaxin [1 ]
Jin, Wei [1 ]
Tang, Jiliang [1 ]
Affiliations
[1] Michigan State Univ, Comp Sci & Engn, E Lansing, MI 48824 USA
Source
KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining | 2020
Funding
National Science Foundation (US);
Keywords
Deep Learning; Neural Networks; Adversarial Examples; Robustness;
DOI
10.1145/3394486.340646
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks across various domains. However, the existence of adversarial examples gives us great hesitation when applying DNN models to safety-critical tasks such as autonomous vehicles. These adversarial examples are intentionally crafted instances, introduced in either the training or the test phase, which can fool DNN models into making severe mistakes. Therefore, researchers are dedicated to devising more robust models to resist adversarial examples, but these are usually broken by new, stronger attacks. This arms race between adversarial attacks and defenses has drawn increasing attention in recent years. In this tutorial, we provide a comprehensive overview of the frontiers and advances of adversarial attacks and their countermeasures. In particular, we give a detailed introduction to different types of attacks under different scenarios, including evasion and poisoning attacks, and white-box and black-box attacks. We also discuss how defense strategies have been developed against these attacks, and how new attacks have emerged to break these defenses. Moreover, we introduce adversarial attacks and defenses in other data domains, especially in graph-structured data. Then, we introduce DeepRobust, a PyTorch adversarial learning library which aims to build a comprehensive and easy-to-use platform to foster this research field. Finally, we summarize the tutorial with discussions on open issues and challenges in adversarial attacks and defenses. The tutorial's official website is at https://sites.google.com/view/kdd-2020-attack-and-defense.
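The evasion attacks the abstract describes craft small input perturbations that flip a model's prediction. The canonical construction from the cited Goodfellow et al. (2014) reference is the fast gradient sign method (FGSM). The sketch below is a minimal illustrative NumPy version on a linear softmax classifier (the `fgsm` helper and the toy weights are assumptions for illustration, not the tutorial's code or the DeepRobust API, which provide attacks for full PyTorch models):

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fgsm(W, b, x, y, eps):
    """FGSM (Goodfellow et al., 2014): take one step of size eps in the
    sign of the input gradient of the cross-entropy loss."""
    p = softmax(W @ x + b)                  # predicted class probabilities
    onehot = np.eye(W.shape[0])[y]
    grad_x = W.T @ (p - onehot)             # dL/dx for softmax + cross-entropy
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)         # keep inputs in the valid range

# Toy demo: random "classifier" and one random input in [0, 1]^4.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)
x = rng.uniform(size=4)
x_adv = fgsm(W, b, x, y=0, eps=0.05)
print(np.abs(x_adv - x).max() <= 0.05 + 1e-9)  # perturbation stays bounded
```

The per-pixel perturbation is bounded by `eps` (an L-infinity budget), which is why such examples can be imperceptible to humans while still causing misclassification.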
Pages: 3541-3542
Page count: 2
References (7)
[1] Athalye A., 2018, Proceedings of Machine Learning Research, Vol. 80.
[2] Carlini N., Wagner D. Towards Evaluating the Robustness of Neural Networks. 2017 IEEE Symposium on Security and Privacy (SP), 2017: 39-57.
[3] Goodfellow I. J., 2014, arXiv:1412.6572.
[4] Jin W., 2020, arXiv:2003.00653.
[5] Li Y., 2020, arXiv preprint.
[6] Szegedy C., 2014, arXiv:1312.6199.
[7] Xu H., Ma Y., Liu H.-C., Deb D., Liu H., Tang J.-L., Jain A. K. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review. International Journal of Automation and Computing, 2020, 17(2): 151-178.