Optimizing Long-term Value for Auction-Based Recommender Systems via On-Policy Reinforcement Learning

Cited by: 3
Authors
Xu, Ruiyang [1 ]
Bhandari, Jalaj [1 ]
Korenkevych, Dmytro [1 ]
Liu, Fan [1 ]
He, Yuchen [1 ]
Nikulkov, Alex [1 ]
Zhu, Zheqing [1 ]
Affiliations
[1] Meta AI, Menlo Park, CA, USA
Source
PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023 | 2023
Keywords
Reinforcement learning; Recommender systems; Long-term user engagement; Policy improvement
DOI
10.1145/3604915.3608854
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Auction-based recommender systems are prevalent in online advertising platforms, but they are typically optimized to allocate recommendation slots based on immediate expected return metrics, neglecting the downstream effects of recommendations on user behavior. In this study, we employ reinforcement learning to optimize for long-term return metrics in an auction-based recommender system. Using temporal difference learning, a fundamental reinforcement learning algorithm, we implement a one-step policy improvement approach that biases the system towards recommendations with higher long-term user engagement metrics. This optimizes value over long horizons while maintaining compatibility with the auction framework. Our approach is grounded in dynamic programming, which guarantees that our method provably improves upon the existing auction-based base policy. Through an online A/B test conducted on an auction-based recommender system that handles billions of impressions and users daily, we empirically establish that our proposed method outperforms the current production system in terms of long-term user engagement metrics.
Pages: 955-962
Number of pages: 8
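
To make the mechanism summarized in the abstract concrete, here is a minimal sketch of the general idea: learn a long-term user-engagement value estimate with TD(0), then bias each candidate's auction score by that estimate (a one-step policy improvement on top of the base auction policy). This is not the authors' implementation; all names (ValueNet, td0_update, biased_auction_score) and parameters (gamma, beta) are illustrative assumptions.

```python
# Illustrative sketch only: TD(0) value learning plus an auction-score bias term.
import torch
import torch.nn as nn


class ValueNet(nn.Module):
    """Approximates the long-term engagement value V(s) of a user state (assumed parameterization)."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state).squeeze(-1)


def td0_update(value_net, optimizer, state, reward, next_state, gamma=0.99):
    """One TD(0) step: move V(s) toward the bootstrapped target r + gamma * V(s')."""
    with torch.no_grad():
        target = reward + gamma * value_net(next_state)
    loss = nn.functional.mse_loss(value_net(state), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def biased_auction_score(immediate_score, long_term_value, beta=0.1):
    """Combine the auction's immediate expected-return score with the learned
    long-term value; beta (a hypothetical knob) controls how strongly long-term
    engagement tilts slot allocation."""
    return immediate_score + beta * long_term_value
```

In such a setup, biased_auction_score would augment, rather than replace, the existing per-candidate auction ranking score before slot allocation, which is one way the approach described in the abstract could remain compatible with the auction framework.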