Recommendations with Negative Feedback via Pairwise Deep Reinforcement Learning

Cited by: 231
Authors
Zhao, Xiangyu [1 ]
Zhang, Liang [2 ]
Ding, Zhuoye [3 ]
Xia, Long [3 ]
Tang, Jiliang [1 ]
Yin, Dawei [3 ]
Affiliations
[1] Michigan State Univ, Data Sci & Engn Lab, E Lansing, MI 48824 USA
[2] JD Com, Intelligent Advertising Lab, Data Sci Lab, Beijing, Peoples R China
[3] JD Com, Data Sci Lab, Beijing, Peoples R China
Source
KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING | 2018
Funding
US National Science Foundation
Keywords
Recommender System; Deep Reinforcement Learning; Pairwise Deep Q-Network;
DOI
10.1145/3219819.3219886
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recommender systems play a crucial role in mitigating information overload by suggesting personalized items or services to users. The vast majority of traditional recommender systems treat recommendation as a static process and follow a fixed strategy. In this paper, we propose a novel recommender system that continuously improves its strategies during interactions with users. We model the sequential interactions between users and the recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to learn optimal strategies automatically, recommending items by trial and error and receiving reinforcement from users' feedback on those items. Users' feedback can be positive or negative, and both types have great potential to boost recommendations. However, negative feedback is far more abundant than positive feedback, so incorporating both simultaneously is challenging: the positive signal can be buried by the negative. In this paper, we develop a novel approach to incorporating both into the proposed deep recommender system (DEERS) framework. Experimental results on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments examine the importance of both positive and negative feedback in recommendations.
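To illustrate the MDP framing in the abstract, the sketch below shows a minimal tabular Q-learning update in which the state keeps separate positive- and negative-feedback histories, echoing the idea of modeling both feedback channels. This is an illustrative sketch only, not the authors' code: the paper's DEERS model is a deep Q-network, and the names (`N_ITEMS`, `encode_state`, the toy histories) are hypothetical.

```python
# Illustrative sketch (not the DEERS implementation): tabular Q-learning
# where the state is the pair (positive history, negative history).

N_ITEMS = 4          # toy catalogue size (hypothetical)
ALPHA, GAMMA = 0.5, 0.9

def encode_state(pos_hist, neg_hist):
    """Hash the two feedback histories into one hashable state key."""
    return (tuple(pos_hist), tuple(neg_hist))

Q = {}  # maps (state, action) -> estimated value

def q_update(pos_hist, neg_hist, action, reward, next_pos, next_neg):
    """One temporal-difference update for a single user interaction."""
    s = encode_state(pos_hist, neg_hist)
    s2 = encode_state(next_pos, next_neg)
    best_next = max(Q.get((s2, a), 0.0) for a in range(N_ITEMS))
    old = Q.get((s, action), 0.0)
    Q[(s, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
    return Q[(s, action)]

# One interaction: recommend item 2, the user clicks (reward +1), so item 2
# joins the positive history while the negative history is unchanged.
v = q_update([0, 1], [3], 2, 1.0, [1, 2], [3])
print(round(v, 3))  # -> 0.5, i.e. 0.5 * (1 + 0.9*0 - 0) on an empty table
```

Keeping the two histories separate means a click and a skip lead to different successor states, so the learned Q-values can distinguish them; DEERS carries the same intuition into a deep Q-network over the two feedback sequences.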
Pages: 1040-1048 (9 pages)