Towards Interpretable Defense Against Adversarial Attacks via Causal Inference

Cited by: 0
Authors
Min Ren [1 ,2 ]
Yun-Long Wang [2 ]
Zhao-Feng He [3 ]
Affiliations
[1] University of Chinese Academy of Sciences
[2] Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
[3] Laboratory of Visual Computing and Intelligent System, Beijing University of Posts and Telecommunications
Keywords
DOI
not available
CLC number
TP18 [Artificial Intelligence Theory]; TP309 [Security and Confidentiality];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ; 081201 ; 0839 ; 1402 ;
Abstract
Deep learning-based models are vulnerable to adversarial attacks, and defending against such attacks is essential in sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms against adversarial attacks; most existing methods are stopgaps tailored to specific adversarial samples. The main obstacle is that how adversarial samples fool deep learning models remains unclear: the underlying working mechanism of adversarial samples has not been well explored, and this is the bottleneck of adversarial defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples, adopting the self-attention/Transformer mechanism as a powerful tool within this causal model. Compared with existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed and instructive analysis is provided. We then propose simple and effective adversarial sample detection and recognition methods according to the revealed mechanism; the causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments demonstrate the effectiveness of the proposed methods, which outperform state-of-the-art defense methods under various adversarial attacks.
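As background for the vulnerability the abstract describes, the sketch below shows how a standard gradient-sign (FGSM-style) adversarial sample is generated against a toy linear classifier. This is generic illustration only, not the paper's causal method; the weights, input, and `fgsm_perturb` helper are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """One FGSM step: move x in the direction that increases the loss.

    For the logistic loss -log(sigmoid(y * w.x)), the gradient w.r.t. x
    is -y * sigmoid(-y * w.x) * w; FGSM keeps only its sign, scaled by eps.
    """
    grad_x = -y * sigmoid(-y * np.dot(w, x)) * w
    return x + eps * np.sign(grad_x)

# Toy linear classifier: predict +1 if w.x > 0 (hypothetical values).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.1])   # clean input, correctly classified as +1
y = 1.0                          # true label

x_adv = fgsm_perturb(x, y, w, eps=0.5)

print(np.dot(w, x) > 0)      # True: clean input predicted +1
print(np.dot(w, x_adv) > 0)  # False: small perturbation flips the prediction
```

A perturbation bounded by eps in each coordinate is enough to flip the decision, which is exactly the kind of behavior whose underlying mechanism the paper analyzes causally.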
Pages: 209 - 226
Page count: 18
Related Papers
50 in total
  • [41] Using Uncertainty as a Defense Against Adversarial Attacks for Tabular Datasets
    Santhosh, Poornima
    Gressel, Gilad
    Darling, Michael C.
    AI 2022: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13728 : 719 - 732
  • [42] A NEURO-INSPIRED AUTOENCODING DEFENSE AGAINST ADVERSARIAL ATTACKS
    Bakiskan, Can
    Cekic, Metehan
    Sezer, Ahmet Dundar
    Madhow, Upamanyu
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3922 - 3926
  • [43] Assured Deep Learning: Practical Defense Against Adversarial Attacks
    Rouhani, Bita Darvish
    Samragh, Mohammad
    Javaheripi, Mojan
    Javidi, Tara
    Koushanfar, Farinaz
    2018 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD) DIGEST OF TECHNICAL PAPERS, 2018,
  • [44] MAEDefense: An Effective Masked AutoEncoder Defense against Adversarial Attacks
    Lyu, Wanli
    Wu, Mengjiang
    Yin, Zhaoxia
    Luo, Bin
    2023 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2023, : 1915 - 1922
  • [45] Deadversarial Multiverse Network - A defense architecture against adversarial attacks
    Berg, Aviram
    Tulchinsky, Elin
    Zaidenerg, Nezer Jacob
    SYSTOR '19: PROCEEDINGS OF THE 12TH ACM INTERNATIONAL SYSTEMS AND STORAGE CONFERENCE, 2019, : 190 - 190
  • [46] Defense against adversarial attacks based on color space transformation
    Wang, Haoyu
    Wu, Chunhua
    Zheng, Kangfeng
    NEURAL NETWORKS, 2024, 173
  • [47] AdvRefactor: A Resampling-Based Defense Against Adversarial Attacks
    Jiang, Jianguo
    Li, Boquan
    Yu, Min
    Liu, Chao
    Sun, Jianguo
    Huang, Weiqing
    Lv, Zhiqiang
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2018, PT II, 2018, 11165 : 815 - 825
  • [48] Boundary Defense Against Black-box Adversarial Attacks
    Aithal, Manjushree B.
    Li, Xiaohua
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2349 - 2356
  • [49] Image Super-Resolution as a Defense Against Adversarial Attacks
    Mustafa, Aamir
    Khan, Salman H.
    Hayat, Munawar
    Shen, Jianbing
    Shao, Ling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 1711 - 1724
  • [50] One Parameter Defense-Defending Against Data Inference Attacks via Differential Privacy
    Ye, Dayong
    Shen, Sheng
    Zhu, Tianqing
    Liu, Bo
    Zhou, Wanlei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 1466 - 1480