Supervised Attention Using Homophily in Graph Neural Networks

Cited: 0
Authors
Chatzianastasis, Michail [1 ]
Nikolentzos, Giannis [1 ]
Vazirgiannis, Michalis [1 ]
Affiliations
[1] LIX, École Polytechnique, IP Paris, Palaiseau, France
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV | 2023 / Vol. 14257
Keywords
Graph Neural Networks; Graph Attention Networks; Supervised Attention; Classification
DOI
10.1007/978-3-031-44216-2_47
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks have become the standard approach for learning problems on graphs. Among the different variants of graph neural networks, graph attention networks (GATs) have been applied with great success to a range of tasks. In the GAT model, each node assigns an importance score to its neighbors using an attention mechanism. However, like other graph neural networks, GATs aggregate messages from nodes that belong to different classes, and therefore produce node representations that are not well separated with respect to the different classes, which can hurt their performance. In this work, to alleviate this problem, we propose a new technique that can be incorporated into any graph attention model to encourage higher attention scores between nodes that share the same class label. We evaluate the proposed method on several node classification datasets, demonstrating increased performance over standard baseline models.
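To illustrate the core idea, the following is a minimal, dependency-free sketch of an auxiliary "supervised attention" loss for a single node: it penalizes the attention distribution when little attention mass falls on neighbors that share the node's class label. The function and variable names are ours for illustration, not the paper's, and the paper's actual formulation may differ in detail.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw attention logits."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def homophily_attention_loss(att_logits, neighbor_labels, center_label):
    """Auxiliary loss for one node (illustrative sketch).

    att_logits: raw attention scores over the node's neighbors.
    neighbor_labels: class label of each neighbor.
    center_label: class label of the center node.

    Returns the negative log of the total attention mass placed on
    same-class neighbors, so the loss decreases as more attention
    goes to homophilous edges.
    """
    alpha = softmax(att_logits)
    mass = sum(a for a, y in zip(alpha, neighbor_labels) if y == center_label)
    if mass == 0.0:
        return 0.0  # no same-class neighbor: nothing to supervise
    return -math.log(mass)
```

In training, a term like this (summed over labeled nodes and weighted by a hyperparameter) would be added to the usual classification loss, so gradients push the attention mechanism toward same-class neighbors.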
Pages: 576-586
Page count: 11