Unsupervised graph-level representation learning with hierarchical contrasts

Cited by: 44
Authors
Ju, Wei [1 ]
Gu, Yiyang [1 ]
Luo, Xiao [2 ]
Wang, Yifan [1 ]
Yuan, Haochen [1 ]
Zhong, Huasong [3 ]
Zhang, Ming [1 ]
Affiliations
[1] Peking Univ, Sch Comp Sci, Beijing 100871, Peoples R China
[2] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90095 USA
[3] Alibaba Grp, Hangzhou 311100, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Graph representation learning; Graph contrastive learning; Graph neural networks; Unsupervised learning;
DOI
10.1016/j.neunet.2022.11.019
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised graph-level representation learning has recently shown great potential in a variety of domains, ranging from bioinformatics to social networks. Many graph contrastive learning methods have recently been proposed to generate discriminative graph-level representations. They typically design multiple types of graph augmentations and enforce consistent representations of a graph under different views. However, these techniques mostly neglect the intrinsic hierarchical structure of the graph, resulting in a limited exploration of its semantic information. Moreover, they often rely on a large number of negative samples to prevent collapse into trivial solutions, and this demand for negative samples can cause memory issues during optimization in graph domains. To address these two issues, this paper develops an unsupervised graph-level representation learning framework named Hierarchical Graph Contrastive Learning (HGCL), which investigates the hierarchical structural semantics of a graph at both the node and graph levels. Specifically, HGCL consists of three parts, i.e., node-level contrastive learning, graph-level contrastive learning, and mutual contrastive learning, to capture graph semantics hierarchically. Furthermore, a Siamese network and momentum update are employed to relieve the demand for excessive negative samples. Finally, experimental results on both benchmark datasets for graph classification and large-scale OGB datasets for transfer learning demonstrate that the proposed HGCL significantly outperforms a broad range of state-of-the-art baselines. © 2022 Elsevier Ltd. All rights reserved.
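The abstract mentions a Siamese network with a momentum update as the device that avoids large negative-sample banks. A minimal sketch of such a momentum (exponential moving average) update is shown below; everything in it — the online/target encoder split, the 0.99 momentum coefficient, and the scalar stand-ins for weight tensors — is an illustrative assumption in the style of Siamese contrastive frameworks, not detail taken from the paper itself.

```python
# Hypothetical sketch: a momentum (EMA) update, as used by Siamese-style
# contrastive frameworks. The target encoder is a slowly moving average of
# the online encoder, which helps prevent representation collapse without
# requiring many negative samples.

def momentum_update(online_params, target_params, m=0.99):
    """Return updated target params: target <- m * target + (1 - m) * online."""
    return [m * t + (1.0 - m) * o for o, t in zip(online_params, target_params)]

# Toy parameters: scalars stand in for the encoders' weight tensors.
online = [1.0, 2.0]
target = [0.0, 0.0]

for _ in range(3):  # a few "training steps"; online weights held fixed here
    target = momentum_update(online, target)

print(target)  # target drifts slowly toward the online parameters
```

With momentum m and a fixed online value, after k steps the target equals (1 - m**k) times the online value, which makes the slow-drift behavior easy to verify by hand.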
Pages: 359-368
Page count: 10
References
60 items in total
[1] Ali, Ahmad; Zhu, Yanmin; Zakarya, Muhammad. Exploiting dynamic spatio-temporal graph convolutional neural networks for citywide traffic flows prediction. Neural Networks, 2022, 145: 233-247.
[2] Baek, Jinheon. ICLR, 2021.
[3] Borgwardt, K. M.; Kriegel, H.-P. Shortest-path kernels on graphs. Fifth IEEE International Conference on Data Mining, Proceedings, 2005: 74-81.
[4] Bromley, J. International Journal of Pattern Recognition and Artificial Intelligence, 1993, 7: 669. DOI: 10.1142/S0218001493000339.
[5] Cai, Qing; Gong, Maoguo; Shen, Bo; Ma, Lijia; Jiao, Licheng. Discrete particle swarm optimization for identifying community structures in signed social networks. Neural Networks, 2014, 58: 4-13.
[6] Chang, Chih-Chung; Lin, Chih-Jen. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3).
[7] Chen, Bin; Sheridan, Robert P.; Hornak, Viktor; Voigt, Johannes H. Comparison of Random Forest and Pipeline Pilot Naive Bayes in prospective QSAR predictions. Journal of Chemical Information and Modeling, 2012, 52(3): 792-803.
[8] Chen, Ting. ICML, 2020.
[9] Chen, Xinlei; He, Kaiming. Exploring simple Siamese representation learning. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 15745-15753.
[10] Chu, Guanyi. IJCAI, 2021.