Cascaded Cross Attention for Review-based Sequential Recommendation

Cited by: 2
Authors
Huang, Bingsen [1 ]
Luo, Jinwei [1 ,3 ]
Du, Weihao [1 ]
Pan, Weike [1 ]
Ming, Zhong [1 ,2 ]
Affiliations
[1] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen, Peoples R China
[2] Shenzhen Univ, Guangdong Lab Artificial Intelligence & Digital E, Shenzhen, Peoples R China
[3] Tencent Mus Entertainment, Shenzhen, Peoples R China
Source
23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Review-based Recommendation; Sequential Recommendation; Gating Mechanism; Cascaded Cross Attention;
DOI
10.1109/ICDM58522.2023.00026
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In recent years, sequential recommendation (SR) has gained significant attention in the recommender systems community. However, most previous works only consider the (user, item, timestep) interaction sequences, which limits the recommendation performance. To overcome this limitation, some studies have utilized user reviews to enrich the understanding of user preferences. However, existing review-based sequential recommendation (RBSR) methods use only either a user's reviews on items or an item's reviews by users, overlooking their complementary nature. In addition, most existing RBSR methods use a simple dot-product operation between the embeddings of a user and the candidate items for scoring, which may not adequately capture the complex relationships among the item sequence, the review sequence and the candidate items. To unlock the potential of RBSR, we propose a novel model called cascaded cross attention (CCA), which utilizes aggregated reviews to compensate for the information that is lacking in individual reviews. Moreover, we propose a cascaded cross-attention layer to better capture the dependencies within a sequence and the relationships between a sequence and the candidate items. Extensive experimental results on three public datasets demonstrate that our CCA outperforms the state-of-the-art methods. Additionally, the case study and visualization results showcase the high interpretability of our CCA.
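To make the abstract's core idea concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' released code) of a cascaded cross-attention scoring block: a gate fuses the item and review embeddings, self-attention captures intra-sequence dependencies, and cross-attention from the candidate items to the sequence replaces the plain dot-product scorer. All module names, dimensions, and the specific gating form are assumptions made for illustration only.

# Minimal sketch of the idea described in the abstract; every design detail here is assumed.
import torch
import torch.nn as nn

class CascadedCrossAttentionSketch(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)                 # assumed gating fusion of item/review views
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(d_model, 1)

    def forward(self, item_seq, review_seq, candidates):
        # item_seq, review_seq: (B, L, d); candidates: (B, C, d)
        g = torch.sigmoid(self.gate(torch.cat([item_seq, review_seq], dim=-1)))
        seq = g * item_seq + (1 - g) * review_seq                    # gated fusion of the two views
        seq, _ = self.self_attn(seq, seq, seq)                       # intra-sequence dependencies
        cand, _ = self.cross_attn(candidates, seq, seq)              # sequence-to-candidate relationships
        return self.score(cand).squeeze(-1)                          # (B, C) relevance scores

# Toy usage with random embeddings
model = CascadedCrossAttentionSketch()
scores = model(torch.randn(2, 10, 64), torch.randn(2, 10, 64), torch.randn(2, 5, 64))
print(scores.shape)  # torch.Size([2, 5])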
Pages: 170-179
Page count: 10