gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling

Cited by: 17
Authors
Petrov, Aleksandr Vladimirovich [1 ]
Macdonald, Craig [1 ]
Affiliations
[1] Univ Glasgow, Glasgow, Lanark, Scotland
Source
PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023 | 2023
Keywords
DOI
10.1145/3604915.3608783
CLC classification code
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
A large catalogue size is one of the central challenges in training recommendation models: with a large number of items, computing scores for all items during training becomes memory- and computationally inefficient, forcing these models to deploy negative sampling. However, negative sampling increases the proportion of positive interactions in the training data, and therefore models trained with negative sampling tend to overestimate the probabilities of positive interactions - a phenomenon we call overconfidence. While the absolute values of the predicted scores/probabilities are not important for the ranking of retrieved recommendations, overconfident models may fail to estimate nuanced differences among the top-ranked items, resulting in degraded performance. In this paper, we show that overconfidence explains why the popular SASRec model underperforms when compared to BERT4Rec. This is contrary to the BERT4Rec authors' explanation that the difference in performance is due to the bi-directional attention mechanism. To mitigate overconfidence, we propose a novel Generalised Binary Cross-Entropy loss function (gBCE) and theoretically prove that it can mitigate overconfidence. We further propose the gSASRec model, an improvement over SASRec that deploys an increased number of negatives and the gBCE loss. We show through detailed experiments on three datasets that gSASRec does not exhibit the overconfidence problem. As a result, gSASRec can outperform BERT4Rec (e.g. +9.47% NDCG on the MovieLens-1M dataset) while requiring less training time (e.g. -73% training time on MovieLens-1M). Moreover, in contrast to BERT4Rec, gSASRec is suitable for large datasets that contain more than 1 million items.
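The abstract describes the gBCE loss only at a high level. The sketch below illustrates the core idea in PyTorch-style Python: the positive-class probability is raised to a power beta derived from the negative sampling rate, which counteracts the overconfidence induced by training on sampled negatives. This is a minimal illustration, not the authors' reference implementation; the function name gbce_loss, the parameterisation of beta via a calibration parameter t, and the batch-mean reduction reflect one reading of the paper and are assumptions that should be checked against the original.

```python
import torch
import torch.nn.functional as F

def gbce_loss(pos_logits: torch.Tensor, neg_logits: torch.Tensor,
              num_items: int, t: float = 0.75) -> torch.Tensor:
    """Hedged sketch of a generalised BCE loss for sampled negatives.

    pos_logits: shape (batch,)     - score of the positive item per sequence
    neg_logits: shape (batch, k)   - scores of k sampled negative items
    num_items:  catalogue size |I|
    t:          assumed calibration parameter in [0, 1];
                t = 0 is intended to recover plain sampled BCE.
    """
    k = neg_logits.shape[-1]                 # negatives per positive
    alpha = k / (num_items - 1)              # negative sampling rate (assumed definition)
    beta = alpha * (t * (1.0 - 1.0 / alpha) + 1.0 / alpha)

    # log(sigma^beta(s+)) = beta * log(sigma(s+)); log(1 - sigma(s-)) = log(sigma(-s-))
    pos_term = beta * F.logsigmoid(pos_logits)           # (batch,)
    neg_term = F.logsigmoid(-neg_logits).sum(dim=-1)     # (batch,)
    return -(pos_term + neg_term).mean()
```

With beta < 1 the loss penalises pushing the positive probability towards 1 less aggressively than standard BCE, which is the overconfidence-reduction effect the abstract refers to.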
Pages: 116-128
Page count: 13