Understanding the robustness of graph neural networks against adversarial attacks

Cited by: 0
Authors
Wu, Tao [1 ,2 ]
Cui, Canyixing [1 ]
Xian, Xingping [2 ]
Qiao, Shaojie [3 ]
Wang, Chao [4 ]
Yuan, Lin [2 ]
Yu, Shui [5 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Sch Comp Sci & Technol, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Sch Cyber Secur & Informat Law, Chongqing 400065, Peoples R China
[3] Chengdu Univ Informat Technol, Sch Software Engn, Chengdu 610225, Peoples R China
[4] Chongqing Normal Univ, Sch Comp & Informat Sci, Chongqing 401331, Peoples R China
[5] Univ Technol Sydney, Sch Comp Sci, Sydney 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks; Adversarial attacks; Adversarial robustness; Decision boundary; Adversarial transferability;
DOI
10.1016/j.knosys.2025.113714
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial attacks, posing significant challenges to their deployment in safety-critical scenarios. This vulnerability has spurred a growing focus on designing robust GNNs. Despite this interest, current advancements have predominantly relied on empirical trial and error, resulting in a limited understanding of the robustness of GNNs against adversarial attacks. To address this issue, we conduct the first large-scale systematic study on the adversarial robustness of GNNs by considering the patterns of input graphs, the architecture of GNNs, and their model capacity, along with discussions on sensitive neurons and adversarial transferability. This work proposes a comprehensive empirical framework for analyzing the adversarial robustness of GNNs. To support the analysis of adversarial robustness in GNNs, we introduce two evaluation metrics: the confidence-based decision surface and the accuracy-based adversarial transferability rate. Through experimental analysis, we derive 11 actionable guidelines for designing robust GNNs, enabling model developers to gain deeper insights. The code of this study is available at https://github.com/star4455/GraphRE.
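The abstract names an "accuracy-based adversarial transferability rate" but the record does not spell out its definition. A minimal sketch of one plausible reading, assuming the rate is the fraction of adversarial examples (crafted against a surrogate model) that flip the target model's prediction away from the true label, counted over examples the target classified correctly before the attack — the function name and this exact formula are illustrative assumptions, not the paper's definition:

```python
def transferability_rate(clean_preds, adv_preds, labels):
    """Assumed accuracy-based transferability rate (illustrative, not the
    paper's exact metric).

    clean_preds / adv_preds: the target model's predicted classes on clean
    vs. adversarial inputs; labels: ground-truth classes. Equal-length lists.
    """
    # Only examples the target got right before the attack can "transfer".
    correct_before = [i for i in range(len(labels)) if clean_preds[i] == labels[i]]
    if not correct_before:
        return 0.0
    # An adversarial example transfers if it pushes the target off the true label.
    flipped = sum(1 for i in correct_before if adv_preds[i] != labels[i])
    return flipped / len(correct_before)

# Example: 4 of 5 inputs are correct pre-attack; 2 of those 4 are flipped.
rate = transferability_rate(
    clean_preds=[0, 1, 1, 0, 2],
    adv_preds=[0, 2, 1, 1, 2],
    labels=[0, 1, 1, 0, 1],
)
print(rate)  # → 0.5
```

Restricting the denominator to initially-correct examples keeps the metric from conflating the target model's base error rate with genuine attack transfer.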
Pages: 13