Denoising Implicit Feedback for Recommendation

Cited by: 125
Authors
Wang, Wenjie [1 ]
Feng, Fuli [1 ]
He, Xiangnan [2 ]
Nie, Liqiang [3 ]
Chua, Tat-Seng [1 ]
Institutions
[1] Natl Univ Singapore, Singapore, Singapore
[2] Univ Sci & Technol China, Hefei, Peoples R China
[3] Shandong Univ, Qingdao, Peoples R China
Source
WSDM '21: PROCEEDINGS OF THE 14TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING | 2021
Funding
National Natural Science Foundation of China; National Research Foundation, Singapore;
Keywords
Recommender System; False-positive Feedback; Adaptive Denoising Training;
DOI
10.1145/3437963.3441800
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The ubiquity of implicit feedback makes it the default choice for building online recommender systems. While the large volume of implicit feedback alleviates the data sparsity issue, the downside is that it is not as clean in reflecting the actual satisfaction of users. For example, in E-commerce, a large portion of clicks do not translate to purchases, and many purchases end up with negative reviews. As such, it is of critical importance to account for the inevitable noise in implicit feedback when training recommenders. However, little work on recommendation has taken the noisy nature of implicit feedback into consideration. In this work, we explore the central theme of denoising implicit feedback for recommender training. We find serious negative impacts of noisy implicit feedback, i.e., fitting the noisy data hinders the recommender from learning the actual user preference. Our goal is to identify and prune the noisy interactions so as to improve the efficacy of recommender training. By observing the process of normal recommender training, we find that noisy feedback typically has large loss values in the early stages. Inspired by this observation, we propose a new training strategy named Adaptive Denoising Training (ADT), which adaptively prunes noisy interactions during training. Specifically, we devise two paradigms for adaptive loss formulation: Truncated Loss, which discards the large-loss samples with a dynamic threshold in each iteration; and Reweighted Loss, which adaptively lowers the weights of large-loss samples. We instantiate the two paradigms on the widely used binary cross-entropy loss and test the proposed ADT strategies on three representative recommenders. Extensive experiments on three benchmarks demonstrate that ADT significantly improves the quality of recommendation over normal training.
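The two loss paradigms summarized in the abstract can be illustrated with a short sketch. The snippet below is a minimal PyTorch-style illustration, not the paper's exact formulation: the linear drop-rate schedule, the hyperparameters `max_drop_rate`, `warmup_steps`, and `beta`, and the exponential down-weighting are illustrative assumptions chosen only to show the idea of pruning or down-weighting large-loss samples on top of binary cross-entropy.

```python
import torch
import torch.nn.functional as F


def drop_rate_schedule(step, max_drop_rate=0.2, warmup_steps=10000):
    """Hypothetical schedule: prune more large-loss samples as training proceeds."""
    return min(max_drop_rate, max_drop_rate * step / warmup_steps)


def truncated_bce(pred, target, drop_rate):
    """Truncated Loss: drop the fraction `drop_rate` of largest-loss samples
    in the batch, treating them as likely false-positive interactions."""
    loss = F.binary_cross_entropy(pred, target, reduction="none")
    num_keep = max(1, int((1.0 - drop_rate) * loss.numel()))
    kept, _ = torch.topk(loss, num_keep, largest=False)  # keep the smallest losses
    return kept.mean()


def reweighted_bce(pred, target, beta=1.0):
    """Reweighted Loss: down-weight large-loss samples instead of dropping them.
    exp(-beta * loss) is one illustrative monotone weighting, not the paper's."""
    loss = F.binary_cross_entropy(pred, target, reduction="none")
    weight = torch.exp(-beta * loss).detach()  # larger loss -> smaller weight
    return (weight * loss).mean()


# Usage inside a normal training loop, given predicted interaction
# probabilities `pred` and binary implicit-feedback labels `target`:
pred = torch.rand(8)
target = torch.randint(0, 2, (8,)).float()
loss_t = truncated_bce(pred, target, drop_rate_schedule(step=5000))
loss_r = reweighted_bce(pred, target)
```

Either loss can replace the standard batch-mean binary cross-entropy in an existing recommender training loop; the base recommender model itself is unchanged.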
Pages: 373-381
Page count: 9