A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Cited: 20
Authors
Mu, Jiaming [1 ,2 ]
Wang, Binghui [3 ]
Li, Qi [1 ,2 ]
Sun, Kun [4 ]
Xu, Mingwei [1 ,2 ]
Liu, Zhuotao [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Inst Network Sci & Cyberspace, Dept Comp Sci, Beijing, Peoples R China
[2] Tsinghua Univ, BNRist, Beijing, Peoples R China
[3] Illinois Inst Technol, Chicago, IL USA
[4] George Mason Univ, Fairfax, VA 22030 USA
Source
CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY | 2021
Funding
National Key Research and Development Program of China;
Keywords
Black-box adversarial attack; structural perturbation; graph neural networks; graph classification;
DOI
10.1145/3460120.3484796
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various graph-structure-related tasks such as node classification and graph classification. However, GNNs are vulnerable to adversarial attacks. Existing works mainly focus on attacking GNNs for node classification; attacks against GNNs for graph classification have not been well explored. In this work, we conduct a systematic study of adversarial attacks against GNNs for graph classification via perturbing the graph structure. In particular, we focus on the most challenging setting, i.e., the hard label black-box attack, where an attacker has no knowledge about the target GNN model and can only obtain predicted labels by querying the target model. To achieve this goal, we formulate our attack as an optimization problem whose objective is to minimize the number of edges perturbed in a graph while maintaining a high attack success rate. The original optimization problem is intractable, so we relax it to a tractable one, which we solve with a theoretical convergence guarantee. We also design a coarse-grained searching algorithm and a query-efficient gradient computation algorithm to decrease the number of queries to the target GNN model. Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations. We also evaluate the effectiveness of our attack under two defenses: one is a well-designed adversarial graph detector, and the other is a target GNN model equipped with a defense to prevent adversarial graph generation. Our experimental results show that such defenses are not effective enough, highlighting the need for more advanced defenses.
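To illustrate the hard-label setting the abstract describes, the following is a minimal, self-contained sketch of a query-based structural attack loop. It is NOT the paper's algorithm (the paper uses a relaxed optimization with sign-gradient estimation and coarse-grained search); this is a greedy random-search stand-in that only shows the hard-label interface: the attacker flips individual edges and observes nothing but the predicted label. The oracle `query_label` here is a hypothetical toy classifier introduced purely for the demo.

```python
import random

def query_label(adj):
    # Hypothetical stand-in for the hard-label oracle: the attacker only
    # observes the predicted class, never logits or gradients.
    # Toy "classifier": predicts the parity of the graph's edge count.
    n_edges = sum(row.count(1) for row in adj) // 2
    return n_edges % 2

def hard_label_attack(adj, true_label, max_queries=200, seed=0):
    """Greedy random-search sketch of a hard-label structural attack:
    flip one edge at a time, keep a flip only if it changes the
    predicted label, so the perturbation stays as small as possible."""
    rng = random.Random(seed)
    n = len(adj)
    perturbed = [row[:] for row in adj]
    flipped = []
    for _ in range(max_queries):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        # Flip edge (i, j) in the candidate graph (symmetric adjacency).
        perturbed[i][j] ^= 1
        perturbed[j][i] ^= 1
        if query_label(perturbed) != true_label:
            flipped.append((i, j))
            return perturbed, flipped  # success: prediction changed
        # Revert the flip if the prediction did not change.
        perturbed[i][j] ^= 1
        perturbed[j][i] ^= 1
    return None, flipped  # failed within the query budget
```

Because the toy oracle depends only on edge parity, any single flip succeeds here; against a real GNN, the paper's optimization-based search is what keeps both the query count and the perturbation size low.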
Pages: 108-125 (18 pages)