Heterogeneous Information Network Embedding With Adversarial Disentangler

Cited by: 14
Authors
Wang, Ruijia [1 ]
Shi, Chuan [1 ]
Zhao, Tianyu [1 ]
Wang, Xiao [1 ]
Ye, Yanfang [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing Key Lab Intelligent Telecommun Software &, Beijing 100876, Peoples R China
[2] Case Western Reserve Univ, Dept Comp & Data Sci, Cleveland, OH 44106 USA
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Generators; Task analysis; Toy manufacturing industry; Correlation; Telecommunications; Robustness; Heterogeneous information networks; network embedding; representation disentanglement; REPRESENTATION;
DOI
10.1109/TKDE.2021.3096231
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Heterogeneous information network (HIN) embedding has gained considerable attention in recent years; it learns low-dimensional representations of nodes while preserving the semantic and structural correlations in HINs. Many existing methods that exploit a meta-path guided strategy have shown promising results. However, the learned node representations can be highly entangled for downstream tasks; for example, an author's publications in multidisciplinary venues may make the prediction of his/her research interests difficult. To address this issue, we develop a novel framework named HEAD (i.e., HIN Embedding with Adversarial Disentangler) to separate the distinct, informative factors of variation in node semantics formulated by meta-paths. More specifically, in HEAD, we first propose the meta-path disentangler to separate node embeddings from various meta-paths into intrinsic and specific spaces; then, with meta-path schemes as self-supervised information, we design two adversarial learners (i.e., meta-path and semantic discriminators) to make the intrinsic embedding more independent of the designed meta-paths and the specific embedding more meta-path dependent. To comprehensively evaluate the performance of HEAD, we perform a set of experiments on four real-world datasets. Compared to the state-of-the-art baselines, a maximum performance improvement of 15 percent demonstrates the effectiveness of HEAD and the benefits of the learned disentangled representations.
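The abstract's core mechanism can be illustrated with a minimal sketch: split each meta-path-specific node embedding into an intrinsic and a specific part, and attach a meta-path classifier that, under adversarial training, would be fooled by the intrinsic part but succeed on the specific part. This is not the authors' implementation; all dimensions, variable names, and the linear-projection form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): n nodes, M meta-paths,
# d-dim input embeddings, k-dim disentangled factors.
n, M, d, k = 5, 3, 16, 8

# One d-dim embedding per (node, meta-path), e.g. from meta-path guided encoders.
z = rng.normal(size=(n, M, d))

# Disentangler: two linear projections split each embedding into an
# intrinsic part (meant to be shared across meta-paths) and a specific part.
W_int = rng.normal(size=(d, k)) / np.sqrt(d)
W_spec = rng.normal(size=(d, k)) / np.sqrt(d)
h_int = z @ W_int    # (n, M, k) intrinsic embeddings
h_spec = z @ W_spec  # (n, M, k) specific embeddings

# Meta-path discriminator: softmax classifier that tries to recover the
# meta-path index from an embedding.  Adversarial training would push
# h_int toward chance-level accuracy (meta-path independence) while
# keeping h_spec highly predictive of its meta-path (meta-path dependence).
W_dis = rng.normal(size=(k, M)) / np.sqrt(k)

def metapath_probs(h):
    """Softmax over meta-path classes for each (node, meta-path) embedding."""
    logits = h @ W_dis  # (n, M, M)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

p_int = metapath_probs(h_int)
p_spec = metapath_probs(h_spec)
print(h_int.shape, h_spec.shape, p_int.shape)
```

In a full adversarial setup the encoder would be updated to maximize the discriminator's loss on `h_int` (e.g. via gradient reversal) and minimize it on `h_spec`; the sketch above shows only the forward pass of that two-space split.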
Pages: 1581-1593
Number of pages: 13