A Black-Box Adversarial Attack Method via Nesterov Accelerated Gradient and Rewiring Towards Attacking Graph Neural Networks

Cited by: 10
Authors
Zhao, Shu [1 ,2 ,3 ]
Wang, Wenyu [1 ,2 ,3 ]
Du, Ziwei [1 ,2 ,3 ]
Chen, Jie [1 ,2 ,3 ]
Duan, Zhen [1 ,2 ,3 ]
Affiliations
[1] Anhui Univ, Key Lab Intelligent Comp & Signal Proc, Minist Educ, Hefei 230601, Anhui, Peoples R China
[2] Anhui Univ, Sch Comp Sci & Technol, Hefei 230601, Anhui, Peoples R China
[3] Anhui Univ, Informat Mat & Intelligent Sensing Lab Anhui Prov, Hefei 230601, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Closed box; Perturbation methods; Optimization; Glass box; Task analysis; Reinforcement learning; Graph neural networks; Adversarial attack; black-box attack; gradient-based attack; graph neural networks; node classification;
DOI
10.1109/TBDATA.2023.3296936
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to well-designed and imperceptible adversarial attacks. Attacks that exploit gradient information are widely used because of their simplicity and efficiency. However, gradient-based attacks face several challenges: 1) perturbations are generated in a white-box setting (i.e., requiring full knowledge of the target model), which is impractical in the real world; 2) the optimization easily falls into local optima; and 3) without constraints on graph properties, a perturbation may be detected even when the number of modified edges is small. To address these challenges, this article proposes a black-box adversarial attack method, named NAG-R, which consists of two modules: a Nesterov Accelerated Gradient attack module and a Rewiring optimization module. Specifically, inspired by adversarial attacks on images, the first module generates perturbations by introducing Nesterov Accelerated Gradient (NAG) to avoid falling into local optima. The second module keeps fundamental properties of the graph (e.g., its total degree) unchanged through a rewiring operation, ensuring that the perturbations remain imperceptible. Extensive experiments show that our method achieves higher attack success rates and better transferability than existing state-of-the-art gradient-based attack methods.
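The two ideas named in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: `grad_fn` stands in for the gradient of the attack loss with respect to a relaxed adjacency matrix (obtained from a surrogate model in the black-box setting), and the rewiring step swaps one edge for another so the graph's total degree is preserved.

```python
import numpy as np

def nag_step(A, grad_fn, velocity, lr=0.1, mu=0.9):
    """One Nesterov Accelerated Gradient update on a relaxed adjacency matrix.

    Illustrative only: grad_fn is a placeholder for the attack-loss gradient
    computed on a surrogate model in the black-box setting.
    """
    lookahead = A + mu * velocity      # evaluate the gradient at a lookahead point
    grad = grad_fn(lookahead)          # gradient w.r.t. the adjacency entries
    velocity = mu * velocity + grad    # accumulate momentum
    return A + lr * velocity, velocity

def rewire(A, u, v, w):
    """Replace undirected edge (u, v) with edge (u, w).

    Deleting one edge and adding another keeps the total degree of the
    graph unchanged, which helps keep the perturbation imperceptible.
    """
    A = A.copy()
    A[u, v] = A[v, u] = 0
    A[u, w] = A[w, u] = 1
    return A
```

Because one edge is removed for every edge added, `A.sum()` (twice the number of undirected edges) is identical before and after a rewiring operation.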
Pages: 1586-1597
Page count: 12