Towards Interpretable Defense Against Adversarial Attacks via Causal Inference

Cited by: 0
Authors
Min Ren [1,2]
Yun-Long Wang [2]
Zhao-Feng He [3]
Affiliations
[1] University of Chinese Academy of Sciences
[2] Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
[3] Laboratory of Visual Computing and Intelligent System, Beijing University of Posts and Telecommunications
Keywords: none listed
DOI: not available
CLC Classification
TP18 [Artificial Intelligence Theory]; TP309 [Security and Confidentiality]
Discipline Classification Codes
081104; 0812; 0835; 1405; 081201; 0839; 1402
Abstract
Deep learning-based models are vulnerable to adversarial attacks, so defending against such attacks is essential in sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms: most existing methods are merely stopgaps against specific adversarial samples. The main obstacle is that it remains unclear how adversarial samples fool deep learning models; their underlying working mechanism has not been well explored, and this is the bottleneck of adversarial defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples, adopting self-attention/transformer modules as a powerful tool within this causal model. Compared with existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, we reveal the working mechanism of adversarial samples and provide instructive analysis. We then propose simple and effective adversarial sample detection and recognition methods according to the revealed mechanism. These causal insights allow us to detect and recognize adversarial samples without any extra model or training. Extensive experiments demonstrate the effectiveness of the proposed methods, which outperform state-of-the-art defense methods under various adversarial attacks.
Pages: 209-226
Number of pages: 18
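
The abstract describes the defense only at a high level, so the following is a minimal illustrative sketch rather than the authors' causal, attention-based method: it pairs a standard FGSM attack with a generic, training-free detection heuristic (prediction instability under small random noise), which satisfies the same "no extra model or training" constraint the abstract highlights. The model choice, noise scale, probe count, and decision threshold are all assumptions for illustration.

```python
# Illustrative sketch only -- NOT the paper's causal/attention method, which the
# abstract does not specify. Shows (a) a standard FGSM adversarial sample and
# (b) a training-free detection heuristic: adversarial inputs tend to sit near
# decision boundaries, so their predicted label flips often under small noise.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights


def fgsm_attack(model, x, y, eps=8 / 255):
    """One signed-gradient step on the input (Fast Gradient Sign Method)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


@torch.no_grad()
def looks_adversarial(model, x, n_probes=16, sigma=0.03, flip_threshold=0.5):
    """Flag x if its predicted label flips under small Gaussian perturbations
    more often than flip_threshold. Needs no extra model or training."""
    base = model(x).argmax(dim=1)
    flips = sum(
        int(model((x + sigma * torch.randn_like(x)).clamp(0, 1))
            .argmax(dim=1).ne(base).item())
        for _ in range(n_probes)
    )
    return flips / n_probes > flip_threshold


model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)       # stand-in for a preprocessed image
y = model(x).argmax(dim=1)           # attack the model's own prediction
x_adv = fgsm_attack(model, x, y)

print("clean flagged:      ", looks_adversarial(model, x))
print("adversarial flagged:", looks_adversarial(model, x_adv))
```

Because FGSM pushes inputs just across a decision boundary, their labels often flip under noise far more frequently than clean inputs do; in the paper itself, this discriminating role is played by the causal/attention analysis rather than by noise probing.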