A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Cited: 20
Authors
Mu, Jiaming [1 ,2 ]
Wang, Binghui [3 ]
Li, Qi [1 ,2 ]
Sun, Kun [4 ]
Xu, Mingwei [1 ,2 ]
Liu, Zhuotao [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Inst Network Sci & Cyberspace, Dept Comp Sci, Beijing, Peoples R China
[2] Tsinghua Univ, BNRist, Beijing, Peoples R China
[3] Illinois Inst Technol, Chicago, IL USA
[4] George Mason Univ, Fairfax, VA 22030 USA
Source
CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY | 2021
Funding
National Key Research and Development Program of China;
Keywords
Black-box adversarial attack; structural perturbation; graph neural networks; graph classification;
DOI
10.1145/3460120.3484796
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various graph-related tasks such as node classification and graph classification. However, GNNs are vulnerable to adversarial attacks. Existing works mainly focus on attacking GNNs for node classification; attacks against GNNs for graph classification have not been well explored. In this work, we conduct a systematic study of adversarial attacks against GNNs for graph classification via perturbing the graph structure. In particular, we focus on the most challenging setting, the hard-label black-box attack, in which an attacker has no knowledge of the target GNN model and can only obtain predicted labels by querying it. To achieve this goal, we formulate our attack as an optimization problem whose objective is to minimize the number of perturbed edges in a graph while maintaining a high attack success rate. Since the original optimization problem is intractable, we relax it to a tractable one that we solve with a theoretical convergence guarantee. We also design a coarse-grained searching algorithm and a query-efficient gradient computation algorithm to reduce the number of queries to the target GNN model. Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations. We also evaluate the effectiveness of our attack under two defenses: one is a well-designed adversarial graph detector, and the other equips the target GNN model itself with a defense to prevent adversarial graph generation. Our experimental results show that such defenses are not effective enough, which highlights the need for more advanced defenses.
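To make the abstract's pipeline concrete, below is a minimal illustrative sketch of a hard-label black-box attack loop on graph classification, assuming only a query interface that returns predicted labels. The function names (query_label, perturb, sign_gradient_estimate, attack), the random sign-probing gradient estimate, and the fixed edge budget are assumptions for illustration, not the paper's exact algorithm; in particular, the coarse-grained searching step described in the abstract is omitted.

import numpy as np

def query_label(adj):
    # Hypothetical hard-label oracle: returns only the predicted class of the
    # graph with adjacency matrix `adj` (replace with queries to the target GNN).
    raise NotImplementedError

def perturb(adj, theta, budget):
    # Flip the `budget` highest-scoring edges in the upper triangle of `theta`
    # (kept symmetric, no self-loops) and return the perturbed adjacency matrix.
    iu = np.triu_indices(adj.shape[0], k=1)
    top = np.argsort(-theta[iu])[:budget]
    rows, cols = iu[0][top], iu[1][top]
    adj_p = adj.copy()
    adj_p[rows, cols] = 1 - adj_p[rows, cols]
    adj_p[cols, rows] = adj_p[rows, cols]
    return adj_p

def sign_gradient_estimate(adj, label, theta, budget, n_samples=20, sigma=1e-3):
    # Query-based descent direction on the relaxed (continuous) edge scores:
    # each random probe costs one hard-label query, and its outcome tells us
    # whether the probed direction flips the predicted label.
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = np.random.randn(*theta.shape)
        flipped = query_label(perturb(adj, theta + sigma * u, budget)) != label
        grad += -u if flipped else u
    return grad / n_samples

def attack(adj, label, budget, steps=100, lr=0.1):
    # Iteratively update the continuous edge scores with the estimated
    # direction, then round to a discrete perturbation within the edge budget.
    theta = np.random.randn(*adj.shape) * 1e-2
    theta = (theta + theta.T) / 2
    for _ in range(steps):
        theta -= lr * sign_gradient_estimate(adj, label, theta, budget)
        adj_p = perturb(adj, theta, budget)
        if query_label(adj_p) != label:  # hard-label success check
            return adj_p
    return None

In a sketch like this, every gradient estimate costs n_samples queries, which is why the abstract emphasizes query-efficient gradient computation and a coarse-grained search to shrink the candidate perturbation space before fine-grained optimization.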
Pages: 108-125
Number of pages: 18