Hierarchical Contrastive Learning Enhanced Heterogeneous Graph Neural Network

Cited by: 5
Authors
Liu N. [1 ]
Wang X. [1 ]
Han H. [1 ]
Shi C. [1 ]
Affiliations
[1] Beijing University of Posts and Telecommunications, Beijing Key Lab of Intelligent Telecommunications Software and Multimedia, Beijing
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; heterogeneous graph neural network; heterogeneous information network;
DOI
10.1109/TKDE.2023.3264691
Abstract
Heterogeneous graph neural networks (HGNNs), as an emerging technique, have shown a superior capacity for dealing with heterogeneous information networks (HINs). However, most HGNNs follow a semi-supervised learning paradigm, which notably limits their use in practice since labels are usually scarce in real applications. Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available. In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo. Unlike traditional contrastive learning, which focuses only on contrasting positive and negative samples, HeCo employs a cross-view contrastive mechanism. Specifically, two views of a HIN (the network schema view and the meta-path view) are proposed to learn node embeddings, so as to capture local and high-order structures simultaneously. Then cross-view contrastive learning, together with a view mask mechanism, is proposed to extract positive and negative embeddings from the two views. This enables the two views to supervise each other collaboratively and finally learn high-level node embeddings. Moreover, to further boost the performance of HeCo, two additional methods are designed to generate harder, high-quality negative samples. The essence of HeCo is to pull positive samples from different views close to each other via cross-view contrast, thereby learning the factors invariant across the two proposed views. However, besides the invariant factors, view-specific factors complementarily provide diverse structural information between different nodes, which should also be captured in the final embeddings. Therefore, we further explore each view independently and propose a modified model, called HeCo++. Specifically, HeCo++ conducts hierarchical contrastive learning, including cross-view and intra-view contrasts, which aims to enhance the mining of the respective structures. Extensive experiments conducted on a variety of real-world networks show the superior performance of the proposed methods over state-of-the-art alternatives. © 1989-2012 IEEE.
Pages: 10884-10896
Page count: 12
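
The abstract describes HeCo's cross-view contrast between a network-schema view and a meta-path view but does not spell out the loss. Below is a minimal illustrative sketch (not the authors' code; the function and variable names, the cosine-similarity choice, and the InfoNCE-style form are our assumptions) of how such a cross-view contrastive loss could be computed from precomputed per-view node embeddings and a positive-sample mask.

# Minimal illustrative sketch (assumption, not the authors' implementation):
# an InfoNCE-style cross-view contrastive loss between embeddings from a
# network-schema view (z_sc) and a meta-path view (z_mp).
import numpy as np

def cross_view_contrastive_loss(z_sc, z_mp, pos_mask, tau=0.5):
    # z_sc, z_mp: (N, d) node embeddings from the two views.
    # pos_mask:   (N, N) binary matrix; pos_mask[i, j] = 1 if node j is a
    #             positive sample for anchor i; all other nodes are negatives.
    z_sc = z_sc / np.linalg.norm(z_sc, axis=1, keepdims=True)  # L2-normalize rows
    z_mp = z_mp / np.linalg.norm(z_mp, axis=1, keepdims=True)
    sim = np.exp(z_sc @ z_mp.T / tau)        # temperature-scaled pairwise similarities
    pos = (sim * pos_mask).sum(axis=1)       # positives in the numerator
    return float(np.mean(-np.log(pos / sim.sum(axis=1))))

# Toy usage: 4 nodes, 8-dim embeddings, each node positive only with itself.
rng = np.random.default_rng(0)
z_sc, z_mp = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(cross_view_contrastive_loss(z_sc, z_mp, np.eye(4)))

A symmetric term contrasting the meta-path view against the schema view, and, for HeCo++, additional intra-view terms, would be combined analogously to form the hierarchical objective the abstract describes.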