Learning Disentangled Graph Convolutional Networks Locally and Globally

Cited by: 14
Authors
Guo, Jingwei [1 ]
Huang, Kaizhu [2 ]
Yi, Xinping [1 ]
Zhang, Rui [3 ]
Affiliations
[1] Univ Liverpool, Dept Elect Engn & Elect, Liverpool L69 3BX, Merseyside, England
[2] Duke Kunshan Univ, Dept Elect & Comp Engn, Suzhou 215316, Peoples R China
[3] Xian Jiaotong Liverpool Univ, Dept Fdn Math, Suzhou 215123, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Routing; Message passing; Task analysis; Representation learning; Image color analysis; Correlation; (Semi-)supervised node classification; disentangled representation learning; graph convolutional networks (GCNs); local and global learning; REPRESENTATION;
DOI
10.1109/TNNLS.2022.3195336
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Graph convolutional networks (GCNs) have emerged as among the most successful learning models for graph-structured data. Despite this success, existing GCNs usually ignore the entangled latent factors that typically arise in real-world graphs, which yields node representations that are difficult to explain. Worse still, while the emphasis is placed on local graph information, global knowledge of the entire graph is lost to a certain extent. To address these issues, we propose a novel framework for GCNs, termed LGD-GCN, that takes advantage of both local and global information to disentangle node representations in the latent space. Specifically, we propose to represent a disentangled latent continuous space with a statistical mixture model by leveraging the neighborhood routing mechanism locally. From this latent space, various new graphs can then be disentangled and learned, which collectively reflect the hidden structures with respect to different factors. On the one hand, a novel regularizer is designed to encourage interfactor diversity for model expressivity in the latent space. On the other hand, factor-specific information is encoded globally by message passing along these new graphs, in order to strengthen intrafactor consistency. Extensive evaluations on both synthetic data and five benchmark datasets show that LGD-GCN brings significant performance gains over recent competitive models in both disentangling and node classification. In particular, LGD-GCN outperforms the disentangled state-of-the-art methods by an average of 7.4% on social network datasets.
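The abstract combines a local routing step with a factor-wise diversity penalty. As a rough illustration only, the sketch below shows a generic DisenGCN-style neighborhood routing layer together with a hypothetical inter-factor diversity regularizer in PyTorch. The function names, the cosine-agreement routing, and the Gram-matrix penalty are assumptions made for illustration; they are not taken from the paper, which instead models the disentangled latent space with a statistical mixture model.

import torch
import torch.nn.functional as F

def neighborhood_routing(x, edge_index, num_factors, num_iters=6):
    """One disentangled layer (illustrative, DisenGCN-style): split the
    features into K factor channels and softly assign each neighbor to
    the factor it agrees with most.

    x:          [N, D] node features, D divisible by num_factors
    edge_index: [2, E] (source, target) pairs
    """
    N, D = x.shape
    K, d = num_factors, D // num_factors
    src, dst = edge_index                       # messages flow src -> dst

    # Per-factor channels, L2-normalized so dot products act like cosines.
    z = F.normalize(x.view(N, K, d), dim=-1)    # [N, K, d]
    c = z.clone()                               # per-node factor centers

    for _ in range(num_iters):
        # Score each edge against the target's factor centers.
        logits = (z[src] * c[dst]).sum(-1)      # [E, K]
        p = torch.softmax(logits, dim=-1)       # soft routing weights
        # Aggregate neighbors into each factor channel of the target node.
        msg = p.unsqueeze(-1) * z[src]          # [E, K, d]
        c = torch.zeros_like(z).index_add_(0, dst, msg)
        c = F.normalize(c + z, dim=-1)          # retain the node's own signal
    return c.reshape(N, D)

def diversity_regularizer(c, num_factors):
    """Hypothetical inter-factor diversity penalty: discourage the mean
    factor centers from being correlated with one another."""
    N, D = c.shape
    d = D // num_factors
    centers = F.normalize(c.view(N, num_factors, d).mean(0), dim=-1)  # [K, d]
    gram = centers @ centers.t()                                      # [K, K]
    off_diag = gram - torch.diag(torch.diag(gram))
    return off_diag.pow(2).mean()

# Toy usage (hypothetical shapes):
# x = torch.randn(100, 64); edge_index = torch.randint(0, 100, (2, 400))
# h = neighborhood_routing(x, edge_index, num_factors=4)
# loss = classification_loss + 0.1 * diversity_regularizer(h, num_factors=4)

A typical invocation would stack a few such routing layers and add a small multiple of the diversity term to the classification loss; the paper's global step of message passing along the disentangled factor graphs is omitted here.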
Pages: 3640 - 3651
Number of pages: 12