A Black-Box Adversarial Attack Method via Nesterov Accelerated Gradient and Rewiring Towards Attacking Graph Neural Networks

Cited by: 5
Authors
Zhao, Shu [1 ,2 ,3 ]
Wang, Wenyu [1 ,2 ,3 ]
Du, Ziwei [1 ,2 ,3 ]
Chen, Jie [1 ,2 ,3 ]
Duan, Zhen [1 ,2 ,3 ]
Affiliations
[1] Anhui Univ, Key Lab Intelligent Comp & Signal Proc, Minist Educ, Hefei 230601, Anhui, Peoples R China
[2] Anhui Univ, Sch Comp Sci & Technol, Hefei 230601, Anhui, Peoples R China
[3] Anhui Univ, Informat Mat & Intelligent Sensing Lab Anhui Prov, Hefei 230601, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Closed box; Perturbation methods; Optimization; Glass box; Task analysis; Reinforcement learning; Graph neural networks; Adversarial attack; black-box attack; gradient-based attack; graph neural networks; node classification;
DOI
10.1109/TBDATA.2023.3296936
CLC classification
TP [Automation technology; computer technology];
Discipline code
0812 ;
Abstract
Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to well-designed and imperceptible adversarial attacks. Attacks that exploit gradient information are widely used because of their simplicity and efficiency. However, gradient-based attacks face several challenges: 1) perturbations are generated via white-box attacks (i.e., requiring full knowledge of the model), which is impractical in the real world; 2) they easily fall into local optima; and 3) the perturbation budget is not limited, so the attack might be detected even when the number of modified edges is small. To address these challenges, this article proposes a black-box adversarial attack method, named NAG-R, which consists of two modules: a Nesterov Accelerated Gradient attack module and a Rewiring optimization module. Specifically, inspired by adversarial attacks on images, the first module generates perturbations by introducing Nesterov Accelerated Gradient (NAG) to avoid falling into local optima. The second module keeps the fundamental properties of the graph (e.g., its total degree) unchanged through a rewiring operation, thus ensuring that the perturbations remain imperceptible. Extensive experiments show that our method achieves significant attack success and transferability over existing state-of-the-art gradient-based attack methods.
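The two ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, step sizes, and the toy scalar objective are all illustrative assumptions. The first function performs one NAG ascent step, evaluating the gradient at a look-ahead point (which is what helps the iterate escape poor local optima); the second swaps one endpoint of an edge, so the graph's total degree is preserved.

```python
import numpy as np

def nag_attack_step(grad_fn, x, v, lr=0.1, mu=0.9):
    """One Nesterov Accelerated Gradient ascent step.

    The gradient is evaluated at the look-ahead point x + mu * v
    rather than at x itself; the velocity v accumulates momentum.
    Ascent (not descent) is used because an attacker maximizes the
    attack loss.
    """
    g = grad_fn(x + mu * v)   # gradient at the look-ahead point
    v = mu * v + lr * g       # momentum update
    return x + v, v

def rewire(A, u, v, w):
    """Replace edge (u, v) with (u, w) in an undirected adjacency
    matrix A. The total degree of the graph is unchanged, which keeps
    the perturbation harder to detect than a plain edge insertion."""
    A = A.copy()
    A[u, v] = A[v, u] = 0     # remove the old edge
    A[u, w] = A[w, u] = 1     # add the new edge at the same node u
    return A
```

For instance, running `nag_attack_step` repeatedly on the toy objective f(x) = -(x - 3)^2 drives x toward the maximizer 3, while `rewire(A, 0, 1, 2)` leaves `A.sum()` (twice the total degree) unchanged.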
Pages: 1586 - 1597
Page count: 12
Related Papers
38 in total
  • [1] Black-Box Adversarial Attack on Graph Neural Networks With Node Voting Mechanism
    Wen, Liangliang
    Liang, Jiye
    Yao, Kaixuan
    Wang, Zhiqiang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (10) : 5025 - 5038
  • [2] Black-box Adversarial Attack and Defense on Graph Neural Networks
    Li, Haoyang
    Di, Shimin
    Li, Zijian
    Chen, Lei
    Cao, Jiannong
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 1017 - 1030
  • [3] A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
    Mu, Jiaming
    Wang, Binghui
    Li, Qi
    Sun, Kun
    Xu, Mingwei
    Liu, Zhuotao
    CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 108 - 125
  • [4] Black-Box Attacks on Graph Neural Networks via White-Box Methods With Performance Guarantees
    Yang, Jielong
    Ding, Rui
    Chen, Jianyu
    Zhong, Xionghu
    Zhao, Huarong
    Xie, Linbo
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (10) : 18193 - 18204
  • [5] Black-Box Adversarial Attack on Graph Neural Networks Based on Node Domain Knowledge
    Sun, Qin
    Yang, Zheng
    Liu, Zhiming
    Zou, Quan
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, KSEM 2023, 2023, 14117 : 203 - 217
  • [6] Boosting Black-Box Attack to Deep Neural Networks With Conditional Diffusion Models
    Liu, Renyang
    Zhou, Wei
    Zhang, Tianwei
    Chen, Kangjie
    Zhao, Jun
    Lam, Kwok-Yan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 5207 - 5219
  • [7] GZOO: Black-Box Node Injection Attack on Graph Neural Networks via Zeroth-Order Optimization
    Yu, Hao
    Liang, Ke
    Hu, Dayu
    Tu, Wenxuan
    Ma, Chuan
    Zhou, Sihang
    Liu, Xinwang
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2025, 37 (01) : 319 - 333
  • [8] An Approximated Gradient Sign Method Using Differential Evolution for Black-Box Adversarial Attack
    Li, Chao
    Wang, Handing
    Zhang, Jun
    Yao, Wen
    Jiang, Tingsong
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2022, 26 (05) : 976 - 990
  • [9] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    PATTERN RECOGNITION, 2022, 131
  • [10] A CMA-ES-Based Adversarial Attack on Black-Box Deep Neural Networks
    Kuang, Xiaohui
    Liu, Hongyi
    Wang, Ye
    Zhang, Qikun
    Zhang, Quanxin
    Zheng, Jun
    IEEE ACCESS, 2019, 7 : 172938 - 172947