Universal Graph Transformer Self-Attention Networks

Cited by: 24
Authors
Dai Quoc Nguyen [1 ]
Tu Dinh Nguyen [2 ]
Dinh Phung [3 ]
Affiliations
[1] Oracle Labs, Brisbane, Qld, Australia
[2] VinAI Res, Hanoi, Vietnam
[3] Monash Univ, Clayton, Vic, Australia
Source
COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION | 2022
Keywords
graph neural networks; graph classification; inductive text classification; graph transformer; unsupervised transductive learning;
DOI
10.1145/3487553.3524258
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We introduce a transformer-based GNN model, named UGformer, to learn graph representations. In particular, we present two UGformer variants: the first variant (released in September 2019) leverages the transformer on a set of sampled neighbors for each input node, while the second (released in May 2021) leverages the transformer on all input nodes. Experimental results demonstrate that the first UGformer variant achieves state-of-the-art accuracies on benchmark datasets for graph classification in both the inductive setting and the unsupervised transductive setting, and that the second UGformer variant obtains state-of-the-art accuracies for inductive text classification. The code is available at: https://github.com/daiquocnguyen/Graph-Transformer.
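The first variant described above can be illustrated with a minimal sketch: for each node, a transformer encoder attends over the node together with its sampled neighbors, and the node's output position becomes its updated representation. This is an assumption-laden illustration of the idea, not the authors' implementation; the class name, hyperparameters, and pooling choice are all hypothetical.

```python
# Hypothetical sketch of the first UGformer variant: self-attention over each
# node plus a set of sampled neighbors. Not the authors' code; names and
# hyperparameters are illustrative only.
import torch
import torch.nn as nn

class NeighborTransformerLayer(nn.Module):
    def __init__(self, dim: int, num_heads: int = 2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=1)

    def forward(self, x: torch.Tensor, neighbor_idx: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node features
        # neighbor_idx: (num_nodes, k) indices of k sampled neighbors per node
        neighbors = x[neighbor_idx]                          # (num_nodes, k, dim)
        seq = torch.cat([x.unsqueeze(1), neighbors], dim=1)  # node placed first
        out = self.encoder(seq)                              # attention over the set
        return out[:, 0]                                     # updated node vector

# Toy usage: 5 nodes with 8-dim features, 3 sampled neighbors per node.
x = torch.randn(5, 8)
neighbor_idx = torch.randint(0, 5, (5, 3))
layer = NeighborTransformerLayer(dim=8)
h = layer(x, neighbor_idx)
print(h.shape)  # torch.Size([5, 8])
```

The second variant would instead run the encoder once over all input nodes as a single sequence, removing the neighbor-sampling step.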
Pages: 193-196
Page count: 4