Multilevel Contrastive Graph Masked Autoencoders for Unsupervised Graph-Structure Learning

Cited by: 3
Authors
Fu, Sichao [1 ]
Peng, Qinmu [1 ]
He, Yang [2 ]
Wang, Xiaorui [2 ]
Zou, Bin [3 ]
Xu, Duanquan [1 ]
Jing, Xiao-Yuan [4 ]
You, Xinge [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[2] Platform Operat & Mkt Ctr, JD Retail, Beijing 100176, Peoples R China
[3] Hubei Univ, Fac Math & Stat, Hubei Key Lab Appl Math, Wuhan 430062, Peoples R China
[4] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Radio frequency; Training; Optimization; Robustness; Decoding; Data models; Graph neural networks (GNNs); node classification; node clustering; unsupervised graph-structure learning (GSL);
DOI
10.1109/TNNLS.2024.3358801
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unsupervised graph-structure learning (GSL), which aims to learn an effective graph structure for arbitrary downstream tasks from the data itself without any label guidance, has recently received increasing attention in various real-world applications. Although several existing unsupervised GSL methods have achieved superior performance on different graph analytical tasks, how to exploit the popular graph masked autoencoder to sufficiently acquire effective supervision information from the data itself, and thereby improve the effectiveness of the learned graph structure, has not been effectively explored so far. To tackle this issue, we present a multilevel contrastive graph masked autoencoder (MCGMAE) for unsupervised GSL. Specifically, we first introduce a graph masked autoencoder with a dual feature-masking strategy to reconstruct the same input graph-structured data under two scenarios: the original structure generated by the data itself and the learned graph structure. Then, an inter- and intra-class contrastive loss is introduced to maximize the mutual information at the feature- and graph-structure-reconstruction levels simultaneously. More importantly, this inter- and intra-class contrastive loss is also applied to the graph encoder module to further strengthen agreement at the feature-encoder level. Compared with existing unsupervised GSL methods, the proposed MCGMAE effectively improves the training robustness of unsupervised GSL via different levels of supervision information from the data itself. Extensive experiments on three graph analytical tasks and eight datasets validate the effectiveness of the proposed MCGMAE.
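The abstract describes contrasting two views of the same graph (one under the original structure, one under the learned structure) so that matched nodes agree. A minimal sketch of such a view-level contrastive objective is below; this is not the authors' code, and the InfoNCE-style formulation, function names, and shapes are illustrative assumptions.

```python
# Hypothetical sketch: an InfoNCE-style contrastive loss between two views of
# node embeddings (e.g., embeddings produced under the original structure vs.
# the learned structure). Names and shapes are assumptions, not MCGMAE's code.
import numpy as np

def contrastive_loss(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """Treat (z1[i], z2[i]) as the positive pair; all other rows are negatives."""
    # L2-normalize each row so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                      # (N, N) similarity matrix
    sim -= sim.max(axis=1, keepdims=True)      # numerical stabilization
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal; minimize their negative log-likelihood.
    return float(-np.mean(np.diag(log_prob)))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = contrastive_loss(z, z)               # identical views: low loss
mismatched = contrastive_loss(z, z[::-1].copy())  # permuted views: higher loss
```

Applying this loss at the feature-reconstruction, structure-reconstruction, and encoder-output levels simultaneously would give a multilevel objective of the kind the abstract outlines.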
Pages: 3464-3478
Page count: 15