Offline Reinforcement Learning for Mobile Notifications

Cited by: 4
Authors
Yuan, Yiping [1]
Muralidharan, Ajith [1]
Nandy, Preetam [1]
Cheng, Miao [1]
Prabhakar, Prakruthi [1]
Affiliations
[1] LinkedIn Corp, Mountain View, CA 94043 USA
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022 | 2022
Keywords
Reinforcement learning; off-policy evaluation; mobile notifications;
DOI
10.1145/3511808.3557083
CLC Classification
TP [automation technology; computer technology]
Subject Classification Code
0812
Abstract
Mobile notification systems play a major role in driving and maintaining user engagement on online platforms. For machine learning practitioners, they are recommender systems with stronger sequential and long-term feedback considerations. Most machine learning applications in notification systems are built around response-prediction models that try to attribute both the short-term and the long-term impact to an individual notification decision. However, a user's experience depends on a sequence of notifications, and attributing impact to a single notification is often inaccurate, if not impossible. In this paper, we argue that reinforcement learning is a better framework for notification systems in terms of both performance and iteration speed. We propose an offline reinforcement learning framework to optimize sequential notification decisions for driving user engagement. We describe a state-marginalized importance sampling policy evaluation approach, which can be used to evaluate a policy offline and tune learning hyperparameters. Through simulations that approximate the notification ecosystem, we demonstrate the performance and benefits of the offline evaluation approach as a part of the reinforcement learning modeling approach. Finally, we collect data through online exploration in the production system, train a Double Deep Q-Network offline, and launch a successful policy online. We also discuss the practical considerations and results obtained by deploying these policies for a large-scale recommendation system use case.
Pages: 3614-3623
Page count: 10
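As a rough illustration of the offline Double Deep Q-Network training mentioned in the abstract, the sketch below shows a standard Double DQN regression target computed from a batch of logged transitions. The network architecture, dimensions, reward shaping, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    """Small MLP Q-function over discrete notification decisions (illustrative)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)


def double_dqn_loss(online: QNetwork, target: QNetwork, batch, gamma: float = 0.99):
    """One Double DQN regression step on logged (s, a, r, s', done) tuples."""
    s, a, r, s_next, done = batch
    # Q-value of the logged action under the online network.
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Double DQN: select the next action with the online network,
        # evaluate it with the target network to reduce overestimation.
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_next).squeeze(1)
        y = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q_sa, y)


if __name__ == "__main__":
    # Toy usage on random "logged" transitions, purely for illustration.
    state_dim, n_actions, batch_size = 8, 3, 32
    online, target = QNetwork(state_dim, n_actions), QNetwork(state_dim, n_actions)
    target.load_state_dict(online.state_dict())
    batch = (
        torch.randn(batch_size, state_dim),           # states
        torch.randint(0, n_actions, (batch_size,)),   # logged actions
        torch.randn(batch_size),                      # rewards
        torch.randn(batch_size, state_dim),           # next states
        torch.randint(0, 2, (batch_size,)).float(),   # episode-end flags
    )
    loss = double_dqn_loss(online, target, batch)
    loss.backward()
```

In an offline setting such as the one described in the abstract, batches like this would be sampled from exploration logs rather than generated by interacting with users, and candidate policies would be compared with an off-policy estimator (the paper's state-marginalized importance sampling approach) before launch.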
相关论文
共 44 条
  • [1] Agarwal Deepak, 2011, P 17 ACM SIGKDD INT, P132
  • [2] Agarwal R., 2019, Striving for simplicity in off-policy deep reinforcement learning
  • [3] [Anonymous], 2016, P 33 INT C MACH LEAR
  • [4] Basu Kinjal, 2020, INT JOINT C ART INT
  • [5] Brockman G., 2016, ARXIV PREPRINT ARXIV
  • [6] Top-K Off-Policy Correction for a REINFORCE Recommender System
    Chen, Minmin
    Beutel, Alex
    Covington, Paul
    Jain, Sagar
    Belletti, Francois
    Chi, Ed H.
    [J]. PROCEEDINGS OF THE TWELFTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM'19), 2019, : 456 - 464
  • [7] Fujimoto S, 2019, PR MACH LEARN RES, V97
  • [8] Near Real-time Optimization of Activity-based Notifications
    Gao, Yan
    Gupta, Viral
    Yan, Jinyun
    Shi, Changji
    Tao, Zhongen
    Xiao, P. J.
    Wang, Curtis
    Yu, Shipeng
    Rosales, Romer
    Muralidharan, Ajith
    Chatterjee, Shaunak
    [J]. KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 283 - 292
  • [9] Email Volume Optimization at LinkedIn
    Gupta, Rupesh
    Liang, Guanfeng
    Tseng, Hsiao-Ping
    Vijay, Ravi Kiran Holur
    Chen, Xiaoyu
    Rosales, Romer
    [J]. KDD'16: PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2016, : 97 - 106
  • [10] Optimizing Email Volume For Sitewide Engagement
    Gupta, Rupesh
    Liang, Guanfeng
    Rosales, Romer
    [J]. CIKM'17: PROCEEDINGS OF THE 2017 ACM CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2017, : 1947 - 1955