WGCN: Graph Convolutional Networks with Weighted Structural Features

Cited by: 26
Authors
Zhao, Yunxiang [1]
Qi, Jianzhong [1]
Liu, Qingwei [1]
Zhang, Rui [2]
Affiliations
[1] University of Melbourne, Melbourne, VIC, Australia
[2] www.ruizhang.info, Melbourne, VIC, Australia
Source
SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL | 2021
Funding
Australian Research Council
Keywords
Directional Graph Convolutional Networks; Structural Information; Random Walk with Restart;
DOI
10.1145/3404835.3462834
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Graph structural information such as topologies or connectivities provides valuable guidance for graph convolutional networks (GCNs) to learn nodes' representations. Existing GCN models that capture nodes' structural information weight in- and out-neighbors equally or differentiate in- and out-neighbors globally without considering nodes' local topologies. We observe that in- and out-neighbors contribute differently to nodes with different local topologies. To explore the directional structural information for different nodes, we propose a GCN model with weighted structural features, named WGCN. WGCN first captures nodes' structural fingerprints via a direction and degree aware Random Walk with Restart algorithm, where the walk is guided by both edge direction and nodes' in- and out-degrees. Then, the interactions between nodes' structural fingerprints are used as the weighted node structural features. To further capture nodes' high-order dependencies and graph geometry, WGCN embeds graphs into a latent space to obtain nodes' latent neighbors and geometrical relationships. Based on nodes' geometrical relationships in the latent space, WGCN differentiates latent, in-, and out-neighbors with an attention-based geometrical aggregation. Experiments on transductive node classification tasks show that WGCN consistently outperforms the baseline models, by up to 17.07% in terms of accuracy, on five benchmark datasets.
Pages: 624-633
Number of pages: 10
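To make the abstract's first step concrete, below is a minimal, illustrative sketch of a direction- and degree-aware Random Walk with Restart that estimates a node's structural fingerprint as its visit distribution. The function name `structural_fingerprint`, the `alpha` mixing parameter, and the exact degree-based weighting are assumptions for illustration, not the authors' implementation; the sketch relies on `networkx` and `numpy`.

```python
import numpy as np
import networkx as nx


def structural_fingerprint(graph: nx.DiGraph, seed, restart_prob=0.15,
                           alpha=0.5, steps=1000, rng_seed=0):
    """Estimate a node's structural fingerprint as the visit distribution of a
    direction- and degree-aware Random Walk with Restart started at `seed`.

    `alpha` (illustrative) balances following out-edges vs. in-edges, and
    neighbour degrees bias the transition probabilities; both are assumptions,
    not the paper's exact rule.
    """
    nodes = list(graph.nodes())
    index = {n: i for i, n in enumerate(nodes)}
    visits = np.zeros(len(nodes))
    rng = np.random.default_rng(rng_seed)
    current = seed
    for _ in range(steps):
        visits[index[current]] += 1
        if rng.random() < restart_prob:
            current = seed  # restart: jump back to the seed node
            continue
        out_nbrs = list(graph.successors(current))
        in_nbrs = list(graph.predecessors(current))
        candidates = out_nbrs + in_nbrs
        if not candidates:
            current = seed  # dead end: restart
            continue
        # Direction- and degree-aware weights: out-neighbours are weighted by
        # alpha and their in-degree, in-neighbours by (1 - alpha) and their
        # out-degree (one plausible choice for guiding the walk).
        weights = np.array(
            [alpha * (graph.in_degree(n) + 1) for n in out_nbrs]
            + [(1.0 - alpha) * (graph.out_degree(n) + 1) for n in in_nbrs],
            dtype=float,
        )
        weights /= weights.sum()
        current = candidates[rng.choice(len(candidates), p=weights)]
    return visits / visits.sum()  # normalised visit counts = fingerprint


# Usage: fingerprints for every node of a small random directed graph.
g = nx.gnp_random_graph(20, 0.2, directed=True, seed=1)
fingerprints = {v: structural_fingerprint(g, v) for v in g.nodes()}
```

Pairwise interactions between such fingerprints (for example, their dot products) could then serve as the weighted structural features that the abstract describes feeding into the GCN layers.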