Self-Attentive Attributed Network Embedding Through Adversarial Learning

Cited by: 9
Authors
Yu, Wenchao [1 ]
Cheng, Wei [1 ]
Aggarwal, Charu [2 ]
Zong, Bo [1 ]
Chen, Haifeng [1 ]
Wang, Wei [3 ]
Affiliations
[1] NEC Labs Amer Inc, Princeton, NJ 08540 USA
[2] IBM Res AI, Yorktown Hts, NY USA
[3] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Source
2019 19TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2019) | 2019
Keywords
network embedding; attributed network; deep embedding; generative adversarial networks; self-attention;
DOI
10.1109/ICDM.2019.00086
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Network embedding aims to learn low-dimensional representations (embeddings) of vertices that preserve the structure and inherent properties of a network. The resulting embeddings benefit downstream tasks such as vertex classification and link prediction. The vast majority of real-world networks are coupled with a rich set of vertex attributes, which can be complementary in learning better embeddings. Existing attributed network embedding models, whether shallow or deep, typically seek to match the representations in the topology space and the attribute space for each individual vertex, assuming that samples from the two spaces are drawn uniformly. This assumption can hardly be guaranteed in practice: owing to the intrinsic sparsity of sampled vertex sequences and the incompleteness of vertex attributes, a discrepancy between the attribute space and the network topology space inevitably exists. Furthermore, the interactions among vertex attributes, a.k.a. cross features, have been largely ignored by existing approaches. To address these issues, in this paper we propose NETTENTION, a self-attentive network embedding approach that efficiently learns vertex embeddings on attributed networks. Instead of sample-wise optimization, NETTENTION aggregates the two types of information by minimizing the difference between the representation distributions in the low-dimensional topology and attribute spaces. The joint inference is encapsulated in a generative adversarial training process, yielding better generalization performance and robustness. The learned distributions respect both locality-preserving and global reconstruction constraints, which are inferred through the training of adversarially regularized autoencoders. Additionally, a multi-head self-attention module is developed to explicitly model the attribute interactions. Extensive experiments on benchmark datasets verify the effectiveness of the proposed NETTENTION model on a variety of tasks, including vertex classification and link prediction.
Pages: 758-767
Page count: 10
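
The paper's implementation is not part of this record. As a rough illustration of the two mechanisms the abstract names, multi-head self-attention over vertex attributes and distribution-level adversarial alignment of the topology and attribute spaces, the following minimal PyTorch sketch may help. All names (`AttributeSelfAttention`, `Discriminator`, `alignment_step`), the dimensions, the mean-pooling, and the standard GAN loss are illustrative assumptions, not the paper's architecture; NETTENTION itself builds on adversarially regularized autoencoders with locality-preserving and global reconstruction constraints, which this sketch omits.

```python
import torch
import torch.nn as nn

class AttributeSelfAttention(nn.Module):
    """Multi-head self-attention over a vertex's attribute fields, so that
    pairwise attribute interactions (cross features) are modeled explicitly
    rather than being flattened away."""
    def __init__(self, num_attrs, embed_dim=64, num_heads=4):
        super().__init__()
        self.attr_embed = nn.Embedding(num_attrs, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, attr_ids):
        # attr_ids: (batch, num_fields) integer attribute indices
        x = self.attr_embed(attr_ids)   # (batch, num_fields, embed_dim)
        out, _ = self.attn(x, x, x)     # self-attention: Q = K = V
        return out.mean(dim=1)          # pool into one vertex embedding

class Discriminator(nn.Module):
    """Separates topology-space embeddings from attribute-space ones."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, z):
        return self.net(z)

bce = nn.BCEWithLogitsLoss()

def alignment_step(disc, z_topo, z_attr, d_opt, g_opt):
    """One adversarial step matching the two embedding *distributions*,
    instead of forcing each vertex's two representations to coincide."""
    # 1) Discriminator: tell topology embeddings from attribute embeddings.
    d_opt.zero_grad()
    real, fake = disc(z_topo.detach()), disc(z_attr.detach())
    d_loss = (bce(real, torch.ones_like(real)) +
              bce(fake, torch.zeros_like(fake)))
    d_loss.backward()
    d_opt.step()

    # 2) Attribute encoder: fool the discriminator, shrinking the
    #    discrepancy between the two representation distributions.
    g_opt.zero_grad()
    logits = disc(z_attr)
    g_loss = bce(logits, torch.ones_like(logits))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

In the full model, `z_topo` and `z_attr` would come from the latent spaces of the two autoencoders, the reconstruction and locality-preserving losses would be added to the encoder objective, and a Wasserstein-style critic (as in adversarially regularized autoencoders) could replace the binary cross-entropy loss used here for brevity.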