Learning Recommenders for Implicit Feedback with Importance Resampling

Cited by: 15
Authors
Chen, Jin [1 ]
Lian, Defu [2 ]
Jin, Binbin [3 ]
Zheng, Kai [1 ]
Chen, Enhong [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Peoples R China
[2] Univ Sci & Technol China, Hefei, Peoples R China
[3] Huawei Cloud Comp Technol Co Ltd, Langfang, Peoples R China
Source
PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22) | 2022
Funding
National Natural Science Foundation of China;
Keywords
Importance Resampling; Negative Sampling; Implicit Feedback; Recommender Systems;
DOI
10.1145/3485447.3512075
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recommendation with implicit feedback has been studied extensively in recent years, but it suffers seriously from the lack of negative samples, which has a significant impact on the training of recommendation models. Existing negative sampling methods draw from either static or adaptive probability distributions. Sampling from an adaptive distribution has received more attention, since it tends to generate harder examples and thus makes recommender training converge faster. However, adaptive item sampling becomes much more time-consuming, particularly for complex recommendation models. In this paper, we propose an Adaptive Sampling method based on Importance Resampling (AdaSIR for short), which is not only almost equally efficient and accurate for any recommendation model, but can also robustly accommodate arbitrary proposal distributions. More concretely, AdaSIR maintains a contextualized sample pool of fixed size with importance resampling, from which items are then sampled only uniformly. Such a simple sampling method can be proved to provide approximately accurate adaptive sampling under certain conditions. The sample pool plays two further important roles: (1) reusing historical hard samples with certain probabilities; (2) estimating the rank of positive samples for weighting, so that recommender training can concentrate on difficult positive samples. Extensive empirical experiments demonstrate that AdaSIR outperforms state-of-the-art methods in terms of sampling efficiency and effectiveness.
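The abstract describes maintaining a fixed-size sample pool via importance resampling and then drawing negatives uniformly from that pool. The following is a minimal Python sketch of that sampling-importance-resampling (SIR) step only, not the authors' implementation: the function name sir_negative_pool, the uniform proposal, and the synthetic scores are illustrative assumptions.

```python
import numpy as np

def sir_negative_pool(scores, proposal_probs, pool_size, rng=None):
    # Sampling-importance-resampling (SIR): turn candidates drawn from an
    # arbitrary proposal into a fixed-size pool whose distribution
    # approximates softmax(scores), i.e. the model's adaptive distribution.
    rng = rng if rng is not None else np.random.default_rng()
    # Importance weight of each candidate: unnormalized target density
    # (softmax of the model score) divided by its proposal probability.
    weights = np.exp(scores - scores.max()) / proposal_probs
    weights = weights / weights.sum()
    # Resample with replacement, proportionally to the importance weights.
    return rng.choice(len(scores), size=pool_size, replace=True, p=weights)

# Hypothetical usage: 1000 candidate items drawn from a uniform proposal.
rng = np.random.default_rng(0)
num_candidates = 1000
scores = rng.normal(size=num_candidates)                  # model scores (synthetic)
proposal = np.full(num_candidates, 1.0 / num_candidates)  # uniform proposal probabilities
pool = sir_negative_pool(scores, proposal, pool_size=50, rng=rng)
negative = rng.choice(pool)  # training then samples negatives uniformly from the pool
```

Because the expensive model scoring happens only when the pool is refreshed, per-step negative sampling reduces to a cheap uniform draw, which is the efficiency argument the abstract makes.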
Pages: 1997-2005
Number of pages: 9