Adversarial Robustness of Deep Learning: Theory, Algorithms, and Applications

Cited by: 10
Authors:
Ruan, Wenjie [1]
Yi, Xinping [2]
Huang, Xiaowei [2]
Affiliations:
[1] Univ Exeter, Exeter, Devon, England
[2] Univ Liverpool, Liverpool, Merseyside, England
Source:
PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021 | 2021
Funding:
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords:
Robustness; Deep Learning; Adversarial Attacks; Defence; Verification; Safety; Neural Networks; Tutorial; AI; Adversarial Examples
DOI:
10.1145/3459637.3482029
Chinese Library Classification (CLC):
TP [Automation and computer technology]
Discipline code:
0812
Abstract:
This tutorial introduces the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques for assessing the vulnerability of various types of deep learning models to adversarial examples. It particularly highlights state-of-the-art techniques in adversarial attacks and in robustness verification of deep neural networks (DNNs). We also introduce effective countermeasures for improving the robustness of deep learning models, with a particular focus on adversarial training. We aim to provide a comprehensive picture of this emerging direction and to raise the community's awareness of the urgency and importance of designing robust deep learning models for safety-critical data analytical applications, ultimately enabling end-users to trust deep learning classifiers. We also summarize open research directions concerning the adversarial robustness of deep learning and their potential benefits for accountable and trustworthy deep learning-based data analytical systems and applications.
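To make the notion of an adversarial example concrete, the sketch below implements the Fast Gradient Sign Method (FGSM) of Goodfellow et al., one of the attack techniques this family of tutorials surveys. The model here is a toy logistic-regression classifier with hand-picked weights, chosen only for illustration; it is not taken from the tutorial itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return x + eps * sign(grad_x loss), the FGSM adversarial input,
    for a logistic model with binary cross-entropy loss."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of the BCE loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Toy model and input (assumed values, for illustration only)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])        # clean input, true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)      # clean input classified correctly: True
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input misclassified: False
```

A perturbation of infinity-norm 0.5 is enough to flip this toy model's decision; on real DNNs, far smaller, visually imperceptible perturbations suffice, which is precisely the vulnerability the tutorial's verification and adversarial-training material addresses.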
Pages: 4866-4869
Page count: 4