Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation

Cited by: 3
Authors
Liu, Ganlin [1 ]
Huang, Xiaowei [1 ]
Yi, Xinping [1 ]
Affiliations
[1] Univ Liverpool, Liverpool, England
Source
COMPUTER VISION - ECCV 2022, PT V | 2022, Vol. 13665
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
Label poisoning attack; Graph neural networks; Label propagation; Graph convolutional network
DOI
10.1007/978-3-031-20065-6_14
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks (GNNs) have achieved outstanding performance in semi-supervised learning tasks with partially labeled graph-structured data. However, labeling graph data for training is challenging, and inaccurate labels may mislead the training process toward erroneous GNN models for node classification. In this paper, we consider label poisoning attacks on training data, where the labels of the input data are modified by an adversary before training, in order to understand to what extent state-of-the-art GNN models are resistant or vulnerable to such attacks. Specifically, we propose a label poisoning attack framework for graph convolutional networks (GCNs), inspired by the equivalence between label propagation and decoupled GCNs, which separate message passing from the neural network. Instead of attacking the entire GCN model, we propose to attack solely the label propagation used for message passing. It turns out that a gradient-based attack on label propagation is both effective and efficient at misleading GCN training. More remarkably, such a label attack can be topology-agnostic, in the sense that the labels to be attacked can be chosen efficiently without knowing the graph structure. Extensive experimental results demonstrate the effectiveness of the proposed method against state-of-the-art GCN-like models.
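To make the attacked component concrete, below is a minimal, self-contained sketch of GCN-style label propagation, the decoupled message-passing step the abstract refers to. The toy adjacency matrix, label matrix, and propagation depth are illustrative assumptions, not the authors' code or data; a gradient-based attack in this spirit would differentiate a training loss on the propagated labels with respect to the clean label matrix and flip the labeled entries with the largest gradient magnitude.

```python
import numpy as np

# Toy undirected graph (illustrative assumption, not the paper's data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Symmetrically normalized adjacency with self-loops, S = D^{-1/2}(A + I)D^{-1/2},
# the propagation operator shared by GCNs and label propagation.
A_hat = A + np.eye(A.shape[0])
d = A_hat.sum(axis=1)
S = np.diag(d ** -0.5) @ A_hat @ np.diag(d ** -0.5)

# One-hot labels for two classes; the all-zero row is an unlabeled node.
Y = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 0]], dtype=float)

# K steps of decoupled message passing on labels: Y_K = S^K Y.
Y_prop = Y.copy()
for _ in range(3):  # K = 3, an arbitrary illustrative depth
    Y_prop = S @ Y_prop

print(Y_prop.argmax(axis=1))  # class predictions after propagation
```

Because the propagation operator S is fixed by the graph while only Y is adversary-controlled, corrupting a few label rows perturbs every propagation step, which is the intuition behind attacking label propagation alone rather than the full GCN.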
Pages: 227-243
Page count: 17