A Statistical Physics Perspective: Understanding the Causality Behind Convolutional Neural Network Adversarial Vulnerability

Cited by: 1
Authors
Wang, Ke [1 ,2 ]
Zhu, Mingjia [3 ]
Chen, Zicong [3 ]
Weng, Jian [1 ,4 ,5 ,6 ]
Li, Ming [2 ]
Yiu, Siu-Ming [7 ]
Ding, Weiping [8 ]
Gu, Tianlong [1 ]
Affiliations
[1] Jinan Univ, Minist Educ, Engn Res Ctr Trustworthy AI, Guangzhou 510632, Peoples R China
[2] Jinan Univ, Coll Cyber Secur, Guangzhou 510632, Peoples R China
[3] Jinan Univ, Coll Informat & Sci, Guangzhou 510632, Peoples R China
[4] Jinan Univ, Natl Joint Engn Res Ctr Network Secur Detect & Pr, Guangzhou 510632, Peoples R China
[5] Jinan Univ, Guangdong Key Lab Data Secur & Privacy Preserving, Guangzhou 510632, Peoples R China
[6] Jinan Univ, Guangdong Hong Kong Joint Lab Data Secur & Privac, Guangzhou 510632, Peoples R China
[7] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[8] Nantong Univ, Sch Informat & Sci, Nantong 226019, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Visualization; Decision making; Physics; Neurons; Neural networks; Convolutional neural networks; Mathematical models; Adversarial vulnerability; cascading failure; causality; convolutional neural network (CNN); statistical physics; ROBUSTNESS;
DOI
10.1109/TNNLS.2024.3359269
CLC number
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The adversarial vulnerability of convolutional neural networks (CNNs) refers to the performance degradation of CNNs under adversarial attacks, leading to incorrect decisions. However, the causes of adversarial vulnerability in CNNs remain unknown. To address this issue, we propose a unique cross-scale analytical approach from a statistical physics perspective. This approach reveals that the vast number of nonlinear effects inherent in CNNs is the fundamental cause of the formation and evolution of system vulnerability. Vulnerability forms spontaneously at the macroscopic level once the symmetry of the system is broken through nonlinear interactions among microscopic state order parameters. We develop a cascading failure algorithm that visualizes how micro perturbations of neuron activations can cascade and influence macro decision paths. Our empirical results demonstrate the interplay between microlevel activation maps and macrolevel decision-making and provide a statistical physics perspective for understanding the causality behind CNN vulnerability. Our work will help subsequent research improve the adversarial robustness of CNNs.
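The cascading-failure idea described in the abstract — a micro perturbation of activations propagating layer by layer until it affects the macro-level decision — can be illustrated with a toy NumPy sketch. This is a hypothetical setup for intuition only, not the authors' algorithm: random 3×3 convolutional layers with ReLU, a clean input versus a slightly perturbed one, and the per-layer relative deviation between the two activation maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(x, w):
    # Naive single-channel 'valid' cross-correlation.
    kh, kw = w.shape
    h, v = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, v))
    for i in range(h):
        for j in range(v):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def forward(x, weights):
    # Return the activation map after every conv+ReLU layer.
    acts = []
    for w in weights:
        x = relu(conv2d(x, w))
        acts.append(x)
    return acts

# Four random 3x3 conv layers (toy stand-in for a trained CNN).
weights = [rng.normal(size=(3, 3)) for _ in range(4)]
x_clean = rng.normal(size=(16, 16))
x_adv = x_clean + 1e-3 * rng.normal(size=(16, 16))  # micro perturbation

clean_acts = forward(x_clean, weights)
adv_acts = forward(x_adv, weights)

# Track how the micro perturbation propagates across layers.
for layer, (a, b) in enumerate(zip(clean_acts, adv_acts), 1):
    dev = np.linalg.norm(a - b) / (np.linalg.norm(a) + 1e-12)
    print(f"layer {layer}: relative activation deviation = {dev:.2e}")
```

Printing the per-layer relative deviation makes the cross-scale picture concrete: a perturbation that is tiny at the input can accumulate through repeated nonlinear layers before reaching the decision stage.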
Pages: 2118-2132 (15 pages)