Contrastive and Generative Graph Convolutional Networks for Graph-based Semi-Supervised Learning

Cited by: 0
Authors
Wan, Sheng [1 ,2 ]
Pan, Shirui [3 ]
Yang, Jian [1 ,2 ]
Gong, Chen [1 ,2 ,4 ]
Affiliations
[1] Nanjing Univ Sci & Technol, PCA Lab, Key Lab Intelligent Percept & Syst High Dimens In, Minist Educ, Nanjing, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Jiangsu Key Lab Image & Video Understanding Socia, Nanjing, Peoples R China
[3] Monash Univ, Fac IT, Clayton, Vic, Australia
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Source
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2021, Vol. 35
Keywords
CLASSIFICATION;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Graph-based Semi-Supervised Learning (SSL) aims to transfer the labels of a handful of labeled data points to the remaining massive unlabeled data via a graph. As one of the most popular graph-based SSL approaches, the recently proposed Graph Convolutional Networks (GCNs) have achieved remarkable progress by combining the expressiveness of neural networks with graph structure. Nevertheless, existing graph-based methods do not directly address the core problem of SSL, i.e., the shortage of supervision, so their performance remains limited. To address this issue, a novel GCN-based SSL algorithm is presented in this paper that enriches the supervision signals by exploiting both data similarities and graph structure. First, by designing a semi-supervised contrastive loss, improved node representations can be generated by maximizing the agreement between different views of the same data or between data from the same class. As a result, the abundant unlabeled data and the scarce yet valuable labeled data jointly provide rich supervision for learning discriminative node representations, which improves the subsequent classification results. Second, the underlying determinative relationship between the data features and the input graph topology is extracted as a supplementary supervision signal for SSL via a graph generative loss related to the input features. Extensive experiments on a variety of real-world datasets verify the effectiveness of our algorithm compared with other state-of-the-art methods.
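The semi-supervised contrastive loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' exact formulation: it assumes an NT-Xent-style loss in which, for each node, the positives are its representation under a second view plus all labeled nodes of the same class, and all remaining representations act as negatives. The function name, the temperature default, and the `-1` convention for unlabeled nodes are assumptions for illustration only.

```python
import numpy as np

def semi_supervised_contrastive_loss(z1, z2, labels, tau=0.5):
    """Hedged sketch of a semi-supervised contrastive loss.

    z1, z2  : (n, d) node embeddings from two views of the graph.
    labels  : (n,) class ids; -1 marks an unlabeled node (assumed convention).
    Positives for node i: its other-view embedding, plus every labeled
    node sharing its class. Everything else is treated as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine similarity
    sim = np.exp(z @ z.T / tau)                         # exponentiated sims
    np.fill_diagonal(sim, 0.0)                          # drop self-pairs

    lab = np.concatenate([labels, labels])              # labels for both views
    pos = np.zeros((2 * n, 2 * n), dtype=bool)
    idx = np.arange(n)
    pos[idx, idx + n] = True                            # cross-view positives
    pos[idx + n, idx] = True
    same = (lab[:, None] == lab[None, :]) & (lab[:, None] >= 0)
    np.fill_diagonal(same, False)
    pos |= same                                         # same-class positives

    denom = sim.sum(axis=1)                             # all non-self pairs
    losses = -np.log((sim * pos).sum(axis=1) / denom)   # per-anchor loss
    return losses.mean()
```

Because every anchor always has at least its cross-view counterpart as a positive, the log argument is strictly positive even when no labels are available, so the loss degrades gracefully to a purely unsupervised contrastive objective.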
Pages: 10049-10057
Page count: 9