CoRGi: Content-Rich Graph Neural Networks with Attention

Cited by: 5
Authors
Kim, Jooyeon [1 ]
Lamb, Angus [2 ]
Woodhead, Simon [3 ]
Jones, Simon Peyton [4 ]
Zhang, Cheng [5 ]
Allamanis, Miltiadis [6 ]
Affiliations
[1] RIKEN, Wako, Saitama, Japan
[2] G Res, London, England
[3] Eedi, London, England
[4] Epic Games, Cary, NC USA
[5] Microsoft, Redmond, WA USA
[6] Microsoft Res, Mountain View, CA USA
Source
PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022 | 2022
Keywords
missing value imputation; recommendation; graph neural networks; neural networks;
DOI
10.1145/3534678.3539306
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Graph representations of a target domain often project it to a set of entities (nodes) and their relations (edges). However, such projections often miss important and rich information. For example, in graph representations used in missing value imputation, items, represented as nodes, may contain rich textual information. However, when processing graphs with graph neural networks (GNNs), such information is either ignored or summarized into a single vector representation used to initialize the GNN. To address this, we present CoRGi, a GNN that considers the rich data within nodes in the context of their neighbors. This is achieved by endowing CoRGi's message passing with a personalized attention mechanism over the content of each node. This way, CoRGi assigns user-item-specific attention scores with respect to the words that appear in an item's content. We evaluate CoRGi on two edge-value prediction tasks and show that CoRGi outperforms existing methods at edge-value prediction, especially on sparse regions of the graph.
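The core idea of the abstract, scoring each word in an item's content against a user embedding during message passing, can be sketched as follows. This is a minimal illustration assuming simple dot-product attention with softmax normalization; the function name and shapes are hypothetical, and the paper's exact formulation may differ.

```python
import numpy as np

def content_attention(user_vec, word_vecs):
    """Sketch of user-personalized attention over an item's content.

    user_vec:  (d,) embedding of the user node.
    word_vecs: (n_words, d) embeddings of the words in the item's content.
    Returns per-word attention weights and the attention-pooled content
    summary that could feed into a GNN message.
    """
    scores = word_vecs @ user_vec          # one relevance score per word
    scores -= scores.max()                 # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over words
    summary = weights @ word_vecs          # weighted average of word vectors
    return weights, summary

rng = np.random.default_rng(0)
user = rng.normal(size=8)                  # hypothetical 8-dim user embedding
content = rng.normal(size=(5, 8))          # item content with 5 words
weights, summary = content_attention(user, content)
```

Because the weights are conditioned on the user vector, two different users attending to the same item content receive different summaries, which is what makes the attention "personalized."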
Pages: 773-783
Page count: 11