A Black-Box Adversarial Attack Method via Nesterov Accelerated Gradient and Rewiring Towards Attacking Graph Neural Networks

Cited by: 10
Authors
Zhao, Shu [1 ,2 ,3 ]
Wang, Wenyu [1 ,2 ,3 ]
Du, Ziwei [1 ,2 ,3 ]
Chen, Jie [1 ,2 ,3 ]
Duan, Zhen [1 ,2 ,3 ]
Affiliations
[1] Anhui Univ, Key Lab Intelligent Comp & Signal Proc, Minist Educ, Hefei 230601, Anhui, Peoples R China
[2] Anhui Univ, Sch Comp Sci & Technol, Hefei 230601, Anhui, Peoples R China
[3] Anhui Univ, Informat Mat & Intelligent Sensing Lab Anhui Prov, Hefei 230601, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Closed box; Perturbation methods; Optimization; Glass box; Task analysis; Reinforcement learning; Graph neural networks; Adversarial attack; black-box attack; gradient-based attack; graph neural networks; node classification;
DOI
10.1109/TBDATA.2023.3296936
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to well-designed, imperceptible adversarial attacks. Gradient-based attacks are widely used because of their simplicity and efficiency, but they face several challenges: 1) perturbations are generated under the white-box setting (i.e., requiring access to the full knowledge of the model), which is impractical in the real world; 2) the optimization easily falls into local optima; and 3) the perturbation budget is unconstrained, so the attack may be detected even when the number of modified edges is small. To address these challenges, this article proposes a black-box adversarial attack method, named NAG-R, which consists of two modules: a Nesterov Accelerated Gradient attack module and a Rewiring optimization module. Inspired by adversarial attacks on images, the first module generates perturbations by introducing Nesterov Accelerated Gradient (NAG) to avoid falling into local optima. The second module keeps fundamental properties of the graph (e.g., its total degree) unchanged through a rewiring operation, thus ensuring that perturbations remain imperceptible. Extensive experiments show that our method achieves significantly higher attack success and transferability than existing state-of-the-art gradient-based attack methods.
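To make the two ideas in the abstract concrete, the sketch below shows (a) a generic NAG-style gradient-ascent step, whose look-ahead gradient evaluation is what helps escape local optima, and (b) a minimal edge-rewiring operation that swaps one edge for another, leaving the edge count, and hence the graph's total degree, unchanged. This is a hypothetical illustration of the general techniques, not the paper's NAG-R implementation; the toy objective, function names, and hyperparameters are all assumptions.

```python
import numpy as np

def nag_ascent(grad_fn, x0, lr=0.1, momentum=0.9, steps=50):
    """Maximize a differentiable objective with a Nesterov-style update."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        # Evaluate the gradient at the look-ahead point x + momentum * v;
        # this anticipatory step is what distinguishes NAG from plain momentum.
        g = grad_fn(x + momentum * v)
        v = momentum * v + lr * g
        x = x + v
    return x

def rewire(edges, remove, add):
    """Replace one edge with another. The number of edges is preserved,
    so the total degree (2 * |E|) of the graph is unchanged."""
    edges = set(edges)
    edges.discard(remove)
    edges.add(add)
    return edges

# Toy objective f(x) = -(x - 3)^2, maximized at x = 3.
grad = lambda x: -2.0 * (x - 3.0)
x_star = nag_ascent(grad, x0=np.array([0.0]))

# Swap edge (0, 1) for (0, 2): two edges before, two edges after.
new_edges = rewire({(0, 1), (1, 2)}, remove=(0, 1), add=(0, 2))
```

In an actual graph attack, the objective would be the victim model's classification loss with respect to entries of the adjacency matrix, and the rewiring step would project the continuous perturbation back onto discrete, budget- and degree-respecting edge edits.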
Pages: 1586-1597
Page count: 12