Structural and positional ensembled encoding for Graph Transformer

Cited by: 2
Authors
Yeom, Jeyoon [1]
Kim, Taero [1]
Chang, Rakwoo [2]
Song, Kyungwoo [1,3]
Affiliations
[1] Yonsei Univ, Dept Stat & Data Sci, Dept Appl Stat, Seoul, South Korea
[2] Univ Seoul, Dept Appl Chem, Seoul, South Korea
[3] Univ Seoul, Dept Artificial Intelligence, Seoul, South Korea
Keywords
Graph neural network; Graph Transformer; Positional encoding; Graph clustering; Attention
DOI
10.1016/j.patrec.2024.05.006
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In the Transformer architecture, positional encoding is a vital component because it provides the model with information about the structure and position of the data. For Graph Transformers, there have been attempts to introduce different positional encodings and to inject additional structural information. To integrate positional and structural information, we propose the Structural and Positional Ensembled Graph Transformer (SPEGT). We developed SPEGT by noting the distinct properties of structural and positional encodings of graphs and the similarity of their computational processes. We define a unified component that integrates three encodings: (i) Random Walk Positional Encoding, (ii) the Shortest Path Distance between each pair of nodes, and (iii) Hierarchical Cluster Encoding. We identify a problem with a well-known positional encoding and experimentally verify that combining it with other encodings resolves it. In addition, SPEGT outperforms previous models on a variety of graph datasets. Through error case analysis, we also show that SPEGT, with its unified positional encoding, performs well on structurally indistinguishable graph data.
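Of the three encodings named in the abstract, component (i), Random Walk Positional Encoding, has a standard definition in the literature (the diagonal entries of successive powers of the random-walk transition matrix). The following is a minimal NumPy sketch of that general definition, not the authors' implementation; the function name and interface are illustrative.

```python
import numpy as np

def random_walk_pe(adj, k):
    """Random Walk Positional Encoding: node i's k-dimensional
    encoding is [RW_ii, (RW^2)_ii, ..., (RW^k)_ii], where
    RW = A D^{-1} is the random-walk transition matrix."""
    deg = adj.sum(axis=0)
    rw = adj / np.maximum(deg, 1)   # column-normalize: RW = A D^{-1}
    n = adj.shape[0]
    pe = np.empty((n, k))
    m = np.eye(n)
    for step in range(k):
        m = m @ rw                  # m now holds RW^(step+1)
        pe[:, step] = np.diag(m)    # return probabilities after step+1 hops
    return pe

# 4-cycle: a walk can only return to its start in an even number of
# steps, and the 2-step return probability is 1/2 for every node.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = random_walk_pe(A, 3)
```

Because this encoding depends only on local return probabilities, nodes in symmetric positions receive identical vectors, which is one reason the paper argues for combining it with shortest-path and cluster information.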
Pages: 104-110
Page count: 7