BKGNN-TI: A Bilinear Knowledge-Aware Graph Neural Network Fusing Text Information for Recommendation

Cited by: 5
Authors
Zhang, Yang [1 ]
Li, Chuanzhen [2 ]
Cai, Juanjuan [2 ]
Liu, Yuchen [1 ]
Wang, Hui [2 ]
Affiliations
[1] Commun Univ China, Sch Informat & Commun Engn, Beijing 100024, Peoples R China
[2] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing 100024, Peoples R China
Keywords
Recommender systems; Knowledge graph; Bilinear collector; Feature interaction;
DOI
10.1007/s44196-022-00154-w
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge graph (KG)-based recommendation methods effectively alleviate the data sparsity and cold-start problems in collaborative filtering. Among these methods, neighborhood-based approaches are the mainstream. However, they ignore some meta-information about the items, specifically the diversity of item information (e.g., texts) and the feature interactions between neighboring nodes. In this paper, we propose a Bilinear Knowledge-aware Graph Neural Network Fusing Text Information (BKGNN-TI), which models both knowledge graph information and text information. In particular, the information in the knowledge graph contains not only the existing high-order connectivity but also the feature interactions between neighboring nodes at the same level in the KG. First, we construct the information propagation layer using a bilinear collector and a linear collector. Feature interactions between neighboring nodes and the high-order connectivity are collected in this layer to generate the item knowledge representations; the bilinear collector emphasizes the importance of second-order feature interactions between neighboring nodes in the KG. Then, texts are also introduced when computing the item representations, which helps further infer user interests. We choose objective program titles and introductions as text information to avoid the influence of subjective factors. BKGNN-TI employs an ALBERT-based sequence encoder with an ALBERT+Bi-LSTM+Attention structure to encode texts, thus enriching the feature representations of the items. In the experiments, we use two datasets: the English public dataset MovieLens-20M and IPTV, a Chinese dataset we constructed ourselves. The results demonstrate that BKGNN-TI outperforms the baselines, indicating that it generalizes well to both Chinese and English datasets.
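The record does not spell out the collectors' equations. As a rough illustration only, assuming the bilinear collector averages element-wise products over pairs of neighbor embeddings (a common form of bilinear aggregation emphasizing second-order interactions) and the linear collector is a plain mean aggregator, the two aggregation signals could be sketched and fused like this (all names and the 50/50 fusion weight are hypothetical, not from the paper):

```python
import numpy as np

def bilinear_collect(neighbors):
    """Second-order aggregation: average the element-wise product of
    every unordered pair of neighbor embeddings.
    `neighbors` is an (n, d) array of neighbor embeddings."""
    n, d = neighbors.shape
    if n < 2:
        return np.zeros(d)
    acc = np.zeros(d)
    for i in range(n):
        for j in range(i + 1, n):
            acc += neighbors[i] * neighbors[j]
    return acc / (n * (n - 1) / 2)  # number of pairs

def linear_collect(neighbors):
    """First-order (standard GNN-style) aggregation: the neighbor mean."""
    return neighbors.mean(axis=0)

# Toy example: an item with 3 KG neighbors, 4-dimensional embeddings.
nbrs = np.array([[1., 0., 2., 1.],
                 [0., 1., 1., 1.],
                 [1., 1., 0., 2.]])
h_linear = linear_collect(nbrs)      # high-order connectivity signal
h_bilinear = bilinear_collect(nbrs)  # pairwise feature-interaction signal
# Fuse both signals into one item representation (weights are illustrative).
h_item = 0.5 * h_linear + 0.5 * h_bilinear
```

The bilinear term captures co-occurrence between neighbor features that a purely linear mean would wash out, which is the intuition the abstract attributes to the bilinear collector.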
Pages: 20