Towards Query-limited Adversarial Attacks on Graph Neural Networks

Cited by: 3
Authors
Li, Haoran [1 ]
Zhang, Jinhong [1 ]
Gao, Song [2 ]
Wu, Liwen [2 ]
Zhou, Wei [2 ]
Wang, Ruxin [3 ]
Affiliations
[1] Yunnan Univ, Engn Res Ctr Cyberspace, Kunming, Yunnan, Peoples R China
[2] Yunnan Univ, Natl Pilot Sch Software, Kunming, Yunnan, Peoples R China
[3] Alibaba Grp, Beijing, Peoples R China
Source
2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI | 2022
Funding
National Natural Science Foundation of China;
Keywords
Adversarial Attack; Graph Neural Network; Graph Representation Learning;
DOI
10.1109/ICTAI56018.2022.00082
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Neural Network (GNN) is a graph representation learning approach for graph-structured data, which has witnessed remarkable progress in the past few years. As a counterpart, the robustness of such models has also received considerable attention. Previous studies show that the performance of a well-trained GNN can be significantly degraded by black-box adversarial examples. In practice, the attacker can only query the target model a very limited number of times, yet existing methods require hundreds of thousands of queries to launch attacks, making the attacker easy to expose. To take a step forward in addressing this issue, in this paper we propose a novel attack method, namely Graph Query-limited Attack (GQA), in which we generate adversarial examples on a surrogate model to fool the target model. Specifically, in GQA, we use contrastive learning to fit the feature extraction layers of the surrogate model in a query-free manner, which reduces the need for queries. Furthermore, to utilize query results sufficiently, we obtain a series of informative queries by changing the input iteratively and store them in a buffer for reuse. Experiments show that GQA can decrease the accuracy of the target model by 4.8%, with only 1% of edges modified and 100 queries performed.
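To make the query-buffer idea from the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' code) of a query-limited surrogate-alignment loop in PyTorch: the black-box target is queried within a fixed budget, each (input, output) pair is cached in a buffer, and the cached pairs are reused to fit a surrogate model whose gradients could then drive adversarial perturbations. The class and function names (QueryBuffer, align_surrogate) and the linear stand-ins for the GNNs are illustrative assumptions.

# Minimal sketch (assumed, not the authors' implementation) of caching a limited
# number of black-box queries and reusing them to align a surrogate model.
import torch
import torch.nn.functional as F

class QueryBuffer:
    """Stores past target-model queries so each query can be reused for training."""
    def __init__(self):
        self.inputs, self.outputs = [], []

    def add(self, x, y):
        self.inputs.append(x.detach())
        self.outputs.append(y.detach())

    def batch(self):
        return torch.stack(self.inputs), torch.stack(self.outputs)

def align_surrogate(surrogate, buffer, optimizer, epochs=10):
    """Fit the surrogate's predictions to the cached target outputs via KL divergence."""
    xs, ys = buffer.batch()
    for _ in range(epochs):
        optimizer.zero_grad()
        logp = F.log_softmax(surrogate(xs), dim=-1)
        loss = F.kl_div(logp, ys, reduction="batchmean")
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    target = torch.nn.Linear(16, 4)      # stand-in for the black-box GNN
    surrogate = torch.nn.Linear(16, 4)   # stand-in for the surrogate GNN
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
    buf = QueryBuffer()
    for _ in range(100):                 # fixed query budget, as in the abstract
        x = torch.randn(16)
        with torch.no_grad():
            buf.add(x, F.softmax(target(x), dim=-1))
    align_surrogate(surrogate, buf, opt)

In the paper's setting, the surrogate's feature extraction layers are additionally pre-trained with contrastive learning in a query-free manner before any alignment against target outputs; that step is omitted from this sketch.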
Pages: 516-521
Number of pages: 6