A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Cited: 20
Authors
Mu, Jiaming [1 ,2 ]
Wang, Binghui [3 ]
Li, Qi [1 ,2 ]
Sun, Kun [4 ]
Xu, Mingwei [1 ,2 ]
Liu, Zhuotao [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Inst Network Sci & Cyberspace, Dept Comp Sci, Beijing, Peoples R China
[2] Tsinghua Univ, BNRist, Beijing, Peoples R China
[3] Illinois Inst Technol, Chicago, IL USA
[4] George Mason Univ, Fairfax, VA 22030 USA
Source
CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY | 2021
Funding
National Key R&D Program of China
Keywords
Black-box adversarial attack; structural perturbation; graph neural networks; graph classification
DOI
10.1145/3460120.3484796
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various graph-structure-related tasks such as node classification and graph classification. However, GNNs are vulnerable to adversarial attacks. Existing works mainly focus on attacking GNNs for node classification; attacks against GNNs for graph classification, by contrast, have not been well explored. In this work, we conduct a systematic study of adversarial attacks against GNNs for graph classification via perturbing the graph structure. In particular, we focus on the most challenging attack, i.e., the hard-label black-box attack, where an attacker has no knowledge about the target GNN model and can only obtain predicted labels by querying the target model. To achieve this goal, we formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate. The original optimization problem is intractable to solve, so we relax it to a tractable one, which we solve with a theoretical convergence guarantee. We also design a coarse-grained searching algorithm and a query-efficient gradient computation algorithm to decrease the number of queries to the target GNN model. Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations. We also evaluate the effectiveness of our attack under two defenses: one is a well-designed adversarial graph detector, and the other is a target GNN model equipped with a defense to prevent adversarial graph generation. Our experimental results show that such defenses are not effective enough, which highlights the need for more advanced defenses.
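The query-based pipeline the abstract describes (label-only queries against the target model, a relaxed edge-perturbation variable, and gradient estimation from query outcomes) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: `query_label` stands in for a hypothetical label-only oracle of the target GNN, and the update rule is a generic sign-gradient scheme, not the paper's exact coarse-grained search or query-efficient gradient computation.

```python
import numpy as np

# Minimal sketch of a hard-label black-box structural attack on graph
# classification. This is NOT the authors' algorithm, only an illustration
# of the general idea: estimate a descent direction for a relaxed edge
# perturbation using only predicted labels from the target model.
# `query_label(adj)` is a hypothetical oracle returning the predicted
# class for adjacency matrix `adj`.

def perturb(adj, theta, budget):
    """Flip the `budget` edges with the largest relaxed scores in theta."""
    rows, cols = np.triu_indices_from(adj, k=1)
    scores = theta[rows, cols]
    top = np.argsort(-np.abs(scores))[:budget]
    new_adj = adj.copy()
    for t in top:
        i, j = rows[t], cols[t]
        new_adj[i, j] = new_adj[j, i] = 1 - new_adj[i, j]  # flip the edge
    return new_adj

def attack(adj, y_true, query_label, budget=10, iters=200, sigma=0.1, lr=0.5):
    n = adj.shape[0]
    theta = np.random.randn(n, n) * 0.01   # relaxed perturbation variable
    theta = (theta + theta.T) / 2          # keep it symmetric
    for _ in range(iters):
        u = np.random.randn(n, n)
        u = (u + u.T) / 2                  # random symmetric direction
        # Label-only signal: does perturbing along +/- u fool the model
        # (success = predicted label differs from the true label)?
        s_plus = query_label(perturb(adj, theta + sigma * u, budget)) != y_true
        s_minus = query_label(perturb(adj, theta - sigma * u, budget)) != y_true
        # Crude sign estimate of the directional derivative of the attack loss.
        g = (float(s_minus) - float(s_plus)) * u
        theta -= lr * g
        if query_label(perturb(adj, theta, budget)) != y_true:
            return perturb(adj, theta, budget)  # adversarial graph found
    return None  # attack failed within the query budget
```

Each iteration of this sketch spends three queries; the paper's coarse-grained search and query-efficient gradient computation exist precisely to reduce this query cost, which the sketch does not attempt to replicate.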
Pages: 108-125
Number of Pages: 18
Related Papers
50 records in total
  • [21] Black-Box Graph Backdoor Defense
    Yang, Xiao
    Li, Gaolei
    Tao, Xiaoyi
    Zhang, Chaofeng
    Li, Jianhua
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT V, 2024, 14491 : 163 - 180
  • [22] Single Node Injection Attack against Graph Neural Networks
    Tao, Shuchang
    Cao, Qi
    Shen, Huawei
    Huang, Junjie
    Wu, Yunfan
    Cheng, Xueqi
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 1794 - 1803
  • [23] Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification
    Wang, Xin
    Chang, Heng
    Xie, Beini
    Bian, Tian
    Zhou, Shiji
    Wang, Daixin
    Zhang, Zhiqiang
    Zhu, Wenwu
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (05) : 2166 - 2178
  • [24] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
    IFAC PAPERSONLINE, 2020, 53 (05): : 420 - 425
  • [25] Multi-view Correlation based Black-box Adversarial Attack for 3D Object Detection
    Liu, Bingyu
    Guo, Yuhong
    Jiang, Jianan
    Tang, Jian
    Deng, Weihong
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1036 - 1044
  • [26] A realistic model extraction attack against graph neural networks
    Guan, Faqian
    Zhu, Tianqing
    Tong, Hanjin
    Zhou, Wanlei
    KNOWLEDGE-BASED SYSTEMS, 2024, 300
  • [27] Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
    Wang, Binghui
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1645 - 1653
  • [28] Cost Aware Untargeted Poisoning Attack Against Graph Neural Networks
    Han, Yuwei
    Lai, Yuni
    Zhu, Yulin
    Zhou, Kai
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 4940 - 4944
  • [29] Black-box Attack against Self-supervised Video Object Segmentation Models with Contrastive Loss
    Chen, Ying
    Yao, Rui
    Zhou, Yong
    Zhao, Jiaqi
    Liu, Bing
    El Saddik, Abdulmotaleb
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (02)
  • [30] Unified Robust Training for Graph Neural Networks Against Label Noise
    Li, Yayong
    Yin, Jie
    Chen, Ling
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2021, PT I, 2021, 12712 : 528 - 540