Adversarial Gradient Driven Exploration for Deep Click-Through Rate Prediction

Cited by: 10
Authors
Wu, Kailun [1 ]
Bian, Weijie [1 ]
Chan, Zhangming [1 ]
Ren, Lejian [1 ]
Xiang, Shiming [2 ]
Han, Shuguang [1 ]
Deng, Hongbo [1 ]
Zheng, Bo [1 ]
Affiliations
[1] Alibaba Group, Beijing, China
[2] Institute of Automation, Chinese Academy of Sciences, Beijing, China
Source
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022) | 2022
Keywords
Exploration and Exploitation; Recommender Systems; Click-Through Rate Prediction; Online Advertising
DOI
10.1145/3534678.3539461
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Exploration-Exploitation (E&E) algorithms are commonly adopted to deal with the feedback-loop issue in large-scale online recommender systems. Most existing studies believe that high uncertainty can be a good indicator of potential reward, and thus primarily focus on the estimation of model uncertainty. We argue that such an approach overlooks the subsequent effect of exploration on model training. From the perspective of online learning, the adoption of an exploration strategy also affects the collection of training data, which in turn influences model learning. To understand the interaction between exploration and training, we design a Pseudo-Exploration module that simulates the model updating process after a certain item is explored and the corresponding feedback is received. We further show that this process is equivalent to adding an adversarial perturbation to the model input, and therefore name our approach Adversarial Gradient Driven Exploration (AGE). For production deployment, we also propose a dynamic gating unit that pre-determines the utility of an exploration, which allows us to spend the limited page-view resources only on explorations that are likely to be effective. The effectiveness of AGE was first examined through extensive ablation studies on an academic dataset. AGE has also been deployed to one of the world-leading display advertising platforms, where we observe significant improvements on various top-line evaluation metrics.
Pages: 2050-2058
Number of pages: 9
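To make the abstract's core idea concrete, below is a minimal PyTorch sketch of gradient-driven exploration as described there: an item's exploration score is the CTR predicted on an input embedding perturbed along the gradient of the model's own prediction, which the paper interprets as simulating a one-step model update after hypothetical exploration feedback. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the names (`CTRModel`, `age_exploration_score`), the step size `epsilon`, and the gradient normalization are all assumptions.

```python
import torch
import torch.nn as nn

class CTRModel(nn.Module):
    """Toy CTR predictor over a dense item representation (illustrative)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(emb)).squeeze(-1)

def age_exploration_score(model: nn.Module, emb: torch.Tensor,
                          epsilon: float = 0.1) -> torch.Tensor:
    """Score items by predicting on an adversarially perturbed input.

    The perturbation follows the gradient of the predicted CTR w.r.t. the
    input embedding, mimicking how one round of exploration feedback would
    shift the model's view of the item (the "Pseudo-Exploration" idea).
    """
    emb = emb.detach().requires_grad_(True)
    pctr = model(emb)
    (grad,) = torch.autograd.grad(pctr.sum(), emb)
    # Step along the normalized gradient direction; epsilon controls how
    # aggressive the simulated exploration is (an assumed hyperparameter).
    perturbed = emb + epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    with torch.no_grad():
        return model(perturbed)

if __name__ == "__main__":
    model = CTRModel()
    items = torch.randn(5, 16)  # five candidate item embeddings
    print("base pCTR:        ", model(items).detach())
    print("exploration score:", age_exploration_score(model, items))
```

One design note: because the gradient is taken with respect to the input rather than the model weights, scoring each candidate costs only a forward and backward pass with no actual parameter update, which is consistent with the paper's emphasis on production deployment.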