Single-Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning

Cited by: 1
Authors
Chen, Dayuan [1 ,2 ]
Zhang, Jian [2 ,3 ]
Lv, Yuqian [1 ,2 ]
Wang, Jinhuan [1 ,2 ]
Ni, Hongjie [4 ]
Yu, Shanqing [1 ,2 ]
Wang, Zhen [3 ,5 ]
Xuan, Qi [1 ,2 ]
Affiliations
[1] Zhejiang Univ Technol, Coll Informat Engn, Inst Cyberspace Secur, Hangzhou 310023, Peoples R China
[2] ZJUT, Binjiang Cyberspace Secur Inst, Hangzhou 310056, Peoples R China
[3] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[4] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou 310023, Peoples R China
[5] Hangzhou Dianzi Univ, ZhuoYue Honors Coll, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Closed box; Training; Cyberspace; Predictive models; Perturbation methods; Glass box; Vectors; Graph injection attack (GIA); graph neural networks (GNN); label specificity attack; reinforcement learning;
DOI
10.1109/TCSS.2024.3377554
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812 (Computer Science and Technology);
Abstract
Graph neural networks (GNNs) have achieved remarkable success in various real-world applications. However, recent studies highlight the vulnerability of GNNs to malicious perturbations. Previous adversaries primarily focus on graph modifications or node injections to existing graphs, yielding promising results but with notable limitations. Graph modification attacks (GMAs) require manipulating the original graph, which is often impractical, while graph injection attacks (GIAs) necessitate training a surrogate model in the black-box setting, leading to significant performance degradation due to the divergence between the surrogate architecture and the actual victim model. Furthermore, most methods concentrate on a single attack goal and lack a generalizable adversary that can develop distinct attack strategies for diverse goals, thus limiting precise control over the victim model's behavior in real-world scenarios. To address these issues, we present a gradient-free generalizable adversary that injects a single malicious node to manipulate the classification result of a target node in the black-box evasion setting. Specifically, we model the single-node injection label specificity attack as a Markov decision process (MDP) and propose the Gradient-free Generalizable Single-Node Injection Attack (G²-SNIA), a reinforcement learning framework employing proximal policy optimization (PPO). By directly querying the victim model, G²-SNIA learns attack patterns from exploration and achieves diverse attack goals with extremely limited attack budgets. Through comprehensive experiments on three acknowledged benchmark datasets and four prominent GNNs in the most challenging and realistic scenario, we demonstrate the superior performance of G²-SNIA over existing state-of-the-art baselines. Moreover, by comparing G²-SNIA with multiple white-box evasion baselines, we confirm its capacity to generate solutions comparable to those of the best adversaries.
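The abstract's core formulation, single-node injection posed as an MDP and solved by a query-only PPO agent, can be pictured with a short sketch. The environment below is a hedged illustration rather than the paper's implementation: the names (SNIAEnv, victim_predict), the edge/feature budgets, and the terminal-only reward are all assumptions made for readability; the actual G²-SNIA state encoding, action space, and reward design are more involved.

```python
import numpy as np

# Minimal sketch of the single-node injection label specificity attack as an
# MDP environment. The victim is treated as a black box that is only queried
# for predictions, matching the gradient-free evasion setting described above.

class SNIAEnv:
    def __init__(self, adj, feats, victim_predict, target_node, target_label,
                 edge_budget=1, feat_budget=3):
        self.adj = adj                    # (N, N) adjacency matrix
        self.feats = feats                # (N, F) binary feature matrix
        self.victim = victim_predict      # black box: (adj, feats) -> (N, C) probs
        self.target = target_node         # node whose prediction is manipulated
        self.label = target_label         # attacker-specified target label
        self.edge_budget = edge_budget    # edges allowed for the injected node
        self.feat_budget = feat_budget    # nonzero features allowed

    def reset(self):
        f = self.feats.shape[1]
        # Append one zero row/column and one zero feature row for the
        # injected node; actions then fill them in step by step.
        self.cur_adj = np.pad(self.adj, ((0, 1), (0, 1)))
        self.cur_feats = np.vstack([self.feats, np.zeros((1, f))])
        self.edges_left, self.feats_left = self.edge_budget, self.feat_budget
        return self.cur_adj, self.cur_feats

    def step(self, action):
        # action = ("edge", node_id): wire the injected node to an existing
        # node; action = ("feat", dim): switch on one feature dimension.
        kind, idx = action
        inj = self.cur_adj.shape[0] - 1
        if kind == "edge" and self.edges_left > 0:
            self.cur_adj[inj, idx] = self.cur_adj[idx, inj] = 1
            self.edges_left -= 1
        elif kind == "feat" and self.feats_left > 0:
            self.cur_feats[inj, idx] = 1
            self.feats_left -= 1
        done = self.edges_left == 0 and self.feats_left == 0
        reward = 0.0
        if done:
            # One query to the victim per finished episode: the reward is the
            # predicted probability of the attacker-specified label.
            probs = self.victim(self.cur_adj, self.cur_feats)
            reward = float(probs[self.target, self.label])
        return (self.cur_adj, self.cur_feats), reward, done
```

A PPO actor-critic over this state/action space would then be trained to maximize the episodic reward; the sketch only fixes the environment side of that loop.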
Pages: 6135-6150
Page count: 16