Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure

Cited by: 108
Authors
Feng, Fuli [1 ]
He, Xiangnan [2 ]
Tang, Jie [3 ]
Chua, Tat-Seng [1 ]
Affiliations
[1] Natl Univ Singapore, Sch Comp, Comp Dr, Singapore 117417, Singapore
[2] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
[3] Tsinghua Univ, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; National Research Foundation, Singapore;
Keywords
Perturbation methods; Training; Neural networks; Robustness; Predictive models; Task analysis; Standards; Adversarial training; graph-based learning; graph neural networks;
DOI
10.1109/TKDE.2019.2957786
CLC number
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent efforts show that neural networks are vulnerable to small but intentional perturbations on input features in visual classification tasks. Because they additionally consider connections between examples (e.g., articles linked by citations tend to belong to the same class), graph neural networks can be even more sensitive to such perturbations, since perturbations from connected examples exacerbate the impact on a target example. Adversarial Training (AT), a dynamic regularization technique, can resist the worst-case perturbations on input features and is a promising choice for improving model robustness and generalization. However, existing AT methods focus on standard classification and are less effective when training models on graphs, since they do not model the impact from connected examples. In this work, we explore adversarial training on graphs, aiming to improve the robustness and generalization of models learned on graphs. We propose Graph Adversarial Training (GraphAT), which takes the impact from connected examples into account when learning to construct and resist perturbations. We give a general formulation of GraphAT, which can be seen as a dynamic regularization scheme based on the graph structure. To demonstrate the utility of GraphAT, we employ it on a state-of-the-art graph neural network model, the Graph Convolutional Network (GCN). We conduct experiments on two citation graphs (Citeseer and Cora) and a knowledge graph (NELL), verifying the effectiveness of GraphAT, which outperforms normal training of GCN by 4.51 percent in node classification accuracy. Code is available at: https://github.com/fulifeng/GraphAT.
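
For illustration, below is a minimal, hypothetical sketch of the kind of graph adversarial regularizer the abstract describes: a perturbation on a node's features is constructed to maximize the divergence between that node's prediction and its neighbors' predictions, and the model is then trained to resist it. The function name, the epsilon budget, the KL divergence, and the assumption that `model` closes over a fixed graph structure are illustrative choices under those assumptions, not the authors' reference implementation.

import torch
import torch.nn.functional as F

def graph_adversarial_regularizer(model, x, edge_index, epsilon=0.05):
    # model      : callable mapping node features (N, F) to class logits (N, C);
    #              assumed to close over the fixed graph structure (e.g., a GCN).
    # x          : node feature matrix of shape (N, F)
    # edge_index : long tensor of shape (2, E) listing directed edges src -> dst
    # epsilon    : L2 budget of the per-node adversarial perturbation
    src, dst = edge_index

    # Neighbor predictions are treated as fixed targets.
    with torch.no_grad():
        p_dst = F.softmax(model(x)[dst], dim=-1)

    # 1) Construct the graph adversarial perturbation with a one-step linear
    #    approximation: take the gradient of the neighbor divergence with
    #    respect to an (initially zero) perturbation and scale it to the budget.
    delta = torch.zeros_like(x, requires_grad=True)
    log_p_src = F.log_softmax(model(x + delta)[src], dim=-1)
    divergence = F.kl_div(log_p_src, p_dst, reduction="batchmean")
    grad = torch.autograd.grad(divergence, delta)[0]
    r_adv = epsilon * F.normalize(grad.detach(), p=2.0, dim=-1)

    # 2) The regularizer: divergence between the perturbed node's prediction
    #    and its neighbors' clean predictions, minimized during training.
    log_q_src = F.log_softmax(model(x + r_adv)[src], dim=-1)
    return F.kl_div(log_q_src, p_dst, reduction="batchmean")

# Usage sketch: the graph adversarial term is added to the supervised loss,
# weighted by a hypothetical coefficient alpha:
# loss = F.cross_entropy(model(x)[train_idx], y[train_idx]) \
#        + alpha * graph_adversarial_regularizer(model, x, edge_index)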
Pages: 2493-2504
Number of pages: 12