GRD-GNN: Graph Reconstruction Defense for Graph Neural Network

Cited by: 0
Authors
Chen J. [1 ,2 ]
Huang G. [2 ]
Zhang D. [2 ]
Zhang X. [3 ]
Ji S. [4 ]
Affiliations
[1] Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou
[2] College of Information Engineering, Zhejiang University of Technology, Hangzhou
[3] College of Control Science and Engineering, Zhejiang University, Hangzhou
[4] College of Computer Science and Technology, Zhejiang University, Hangzhou
Source
Jisuanji Yanjiu yu Fazhan/Computer Research and Development | 2021 / Vol. 58 / No. 05
Funding
National Natural Science Foundation of China
Keywords
Adversarial attack; Graph neural network; Graph reconstruction; Graph representation learning; Node classification;
DOI
10.7544/issn1000-1239.2021.20200935
Abstract
In recent years, graph neural networks (GNNs) have been widely applied in daily life owing to their strong performance in graph representation learning, in domains such as e-commerce, social media, and biology. However, research has shown that GNNs are vulnerable to carefully crafted adversarial attacks that can cause the model to fail, so improving the robustness of graph neural networks is essential. Several defense methods have been proposed to improve the robustness of GNNs, but reducing the success rate of adversarial attacks while preserving the performance of the GNN's main task remains a challenge. Through observation of various adversarial samples, we conclude that node pairs connected by adversarial edges exhibit lower structural similarity and lower node-feature similarity than clean ones. Based on this observation, we propose a graph reconstruction defense for graph neural networks named GRD-GNN. Considering both graph structure and node features, GRD-GNN uses the number of common neighbors and the similarity of node features to guide graph reconstruction: it not only removes adversarial edges but also adds edges beneficial to GNN performance, thereby strengthening the graph structure. Finally, comprehensive experiments on three real-world datasets verify the state-of-the-art defense performance of the proposed GRD-GNN compared with baselines. The paper also provides explanations of the experimental results and an analysis of the method's effectiveness. © 2021, Science Press. All rights reserved.
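The reconstruction rule the abstract describes (score each node pair by common-neighbor overlap and node-feature similarity, prune low-scoring existing edges, add high-scoring missing ones) can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `reconstruct_graph`, the equal weighting of the two similarities, and the thresholds `remove_thresh`/`add_thresh` are all assumptions for illustration; the paper's exact scoring and thresholds may differ.

```python
import numpy as np

def reconstruct_graph(adj, features, remove_thresh=0.1, add_thresh=0.9):
    """Hypothetical sketch of a graph-reconstruction defense in the
    spirit of GRD-GNN; names and thresholds are assumed, not from the paper."""
    # Cosine similarity between node feature vectors.
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    feat_sim = normed @ normed.T

    # Structural similarity: common-neighbor count, Jaccard-normalized.
    common = adj @ adj                      # (i, j) = number of shared neighbors
    deg = adj.sum(axis=1)
    union = deg[:, None] + deg[None, :] - common
    struct_sim = common / np.maximum(union, 1)

    # Combined score: pairs of similar nodes score high (weighting assumed).
    score = 0.5 * (feat_sim + struct_sim)

    new_adj = adj.copy()
    # Remove existing edges with low similarity (candidate adversarial edges).
    new_adj[(adj == 1) & (score < remove_thresh)] = 0
    # Add missing edges between highly similar node pairs.
    new_adj[(adj == 0) & (score > add_thresh)] = 1
    np.fill_diagonal(new_adj, 0)            # no self-loops
    return np.maximum(new_adj, new_adj.T)   # keep the graph undirected

# Toy usage: a 4-node graph with random binary features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
features = rng.integers(0, 2, size=(4, 8)).astype(float)
print(reconstruct_graph(adj, features))
```

The reconstructed adjacency matrix would then be fed to a standard GNN (e.g., a GCN for node classification) in place of the possibly poisoned original graph.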
Pages: 1075-1091
Number of pages: 16