Self-supervised Graph-level Representation Learning with Adversarial Contrastive Learning

Cited by: 45
Authors
Luo, Xiao [1 ]
Ju, Wei [2 ]
Gu, Yiyang [2 ]
Mao, Zhengyang [2 ]
Liu, Luchen [2 ]
Yuan, Yuhui [3 ]
Zhang, Ming [2 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90095 USA
[2] Peking Univ, Sch Comp Sci, Beijing 100871, Peoples R China
[3] Microsoft Res Asia, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Graph representation learning; graph neural networks; contrastive learning; self-supervised learning; PREDICTION;
DOI
10.1145/3624018
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recently developed unsupervised graph representation learning approaches apply contrastive learning to graph-structured data and achieve promising performance. However, these methods focus mainly on graph augmentation for positive samples, while negative mining strategies for graph contrastive learning remain under-explored, leading to sub-optimal performance. To tackle this issue, we propose a Graph Adversarial Contrastive Learning (GraphACL) scheme that learns a bank of negative samples for effective self-supervised whole-graph representation learning. GraphACL consists of (i) a graph encoding branch that generates the representations of positive samples and (ii) an adversarial generation branch that produces a bank of negative samples. To generate more powerful hard negatives, our method minimizes the contrastive loss when updating the encoder while adversarially maximizing it over the negative samples, yielding a more challenging contrastive task. Moreover, the quality of the representations produced by the adversarial generation branch is enhanced by regularizing with carefully designed bank divergence and bank orthogonality losses. We optimize the parameters of the graph encoding branch and the adversarial generation branch alternately. Extensive experiments on 14 real-world benchmarks covering both graph classification and transfer learning demonstrate the effectiveness of the proposed approach over existing graph self-supervised representation learning methods.
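The abstract describes an InfoNCE-style contrastive loss computed against a learnable negative bank, plus an orthogonality regularizer on that bank. The paper's exact formulation is not given in this record, so the sketch below is only illustrative: the function names, the cosine-similarity InfoNCE form, and the `||BB^T - I||_F^2` orthogonality penalty are our assumptions, not the authors' definitions.

```python
import numpy as np

def _normalize(x, axis=-1):
    # L2-normalize vectors so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(z, z_pos, bank, tau=0.5):
    # InfoNCE: pull the anchor z toward its positive, push it from bank negatives
    z, z_pos, bank = _normalize(z), _normalize(z_pos), _normalize(bank)
    pos = np.exp(z @ z_pos / tau)          # similarity to the positive sample
    neg = np.exp(bank @ z / tau).sum()     # similarities to all bank negatives
    return -np.log(pos / (pos + neg))

def bank_orthogonality(bank):
    # ||B B^T - I||_F^2 on row-normalized bank: encourages diverse,
    # near-orthogonal negatives (one plausible reading of the bank losses)
    b = _normalize(bank)
    gram = b @ b.T
    return np.sum((gram - np.eye(len(bank))) ** 2)

rng = np.random.default_rng(0)
z = rng.normal(size=8)                     # anchor graph embedding
z_pos = z + 0.05 * rng.normal(size=8)      # augmented (positive) view
bank = rng.normal(size=(4, 8))             # 4 learnable negative embeddings
loss = info_nce(z, z_pos, bank)
# Alternating optimization, per the abstract: the encoder takes gradient
# descent steps on `loss`, while the bank takes gradient ascent steps on it
# (minus the divergence/orthogonality regularizers) to stay a hard adversary.
```

The sign flip between the two updates is the adversarial ingredient: the same contrastive objective is minimized by the encoder and maximized by the negative bank.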
Pages: 23