Reinforcement online learning to rank with unbiased reward shaping

Cited by: 0
Authors
Shengyao Zhuang
Zhihao Qiao
Guido Zuccon
Institution
The University of Queensland
Source
Information Retrieval Journal | 2022, Vol. 25
Keywords
Online learning to rank; Unbiased reward shaping; Reinforcement learning
DOI
Not available
Abstract
Online learning to rank (OLTR) aims to learn a ranker directly from implicit feedback derived from users' interactions, such as clicks. Clicks, however, are a biased signal: in particular, top-ranked documents are likely to attract more clicks than documents further down the ranking (position bias). In this paper, we propose a novel learning algorithm for OLTR that uses reinforcement learning to optimize rankers: Reinforcement Online Learning to Rank (ROLTR). In ROLTR, the gradients of the ranker are estimated from the rewards assigned to both clicked and unclicked documents. To remove the position bias contained in these reward signals, we introduce unbiased reward shaping functions that exploit inverse propensity scoring for clicked and unclicked documents. Because our method also models unclicked documents, fewer user interactions are required to effectively train a ranker, which yields gains in efficiency. Empirical evaluation on standard OLTR datasets shows that ROLTR achieves state-of-the-art performance and provides a significantly better user experience than other OLTR approaches. To facilitate the reproducibility of our experiments, we make all experiment code available at https://github.com/ielab/OLTR.
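
To illustrate the idea the abstract describes, below is a minimal Python sketch of how inverse-propensity-scored (IPS) reward shaping could plug into a REINFORCE-style update for a linear ranker. The (1/rank)^eta propensity model, the +1/p and -1/p shaping values, and all function names are assumptions made for this sketch, not the paper's exact formulation; the authors' actual implementation is in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def examination_propensity(ranks, eta=1.0):
    # Rank-based position-bias model: P(examined | rank) = (1 / rank) ** eta.
    # This functional form is a common assumption, not taken from the paper.
    return (1.0 / ranks) ** eta

def shaped_rewards(clicks, eta=1.0):
    # IPS-weighted rewards: +1/p for clicked documents, -1/p for unclicked
    # ones, so both carry a de-biased learning signal. The exact +/- shaping
    # values here are illustrative.
    ranks = np.arange(1, len(clicks) + 1)
    inv_p = 1.0 / examination_propensity(ranks, eta)
    return np.where(np.asarray(clicks) == 1, inv_p, -inv_p)

def reinforce_update(w, X, clicks, lr=0.01):
    # One REINFORCE-style step for a linear ranker s = Xw with a softmax
    # policy. For brevity every displayed slot is treated as a draw from the
    # same softmax; a full Plackett-Luce policy would sample ranks
    # sequentially without replacement.
    scores = X @ w
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    for i, r in enumerate(shaped_rewards(clicks)):
        grad_log_pi = X[i] - probs @ X  # d/dw log softmax(s)_i
        w = w + lr * r * grad_log_pi
    return w

# Toy usage: 5 displayed documents with 3 features; clicks at ranks 1 and 3.
X = rng.normal(size=(5, 3))
w = reinforce_update(np.zeros(3), X, clicks=[1, 0, 1, 0, 0])
print(w)
```

Weighting each signal by the inverse of its examination propensity is what makes the shaping unbiased in expectation: a click collected at a rarely examined low rank counts for more than one at the top, cancelling the position bias in the reward.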
Pages: 386-413
Page count: 27
Related Papers
50 records in total
  • [41] Maximum reward reinforcement learning: A non-cumulative reward criterion
    Quah, K. H.; Quek, Chai
    Expert Systems with Applications, 2006, 31(2): 351-359
  • [42] Reinforcement learning and the reward positivity with aversive outcomes
    Bauer, Elizabeth A.; Watanabe, Brandon K.; Macnamara, Annmarie
    Psychophysiology, 2024, 61(4)
  • [43] A Modified Average Reward Reinforcement Learning Based on Fuzzy Reward Function
    Zhai, Zhenkun; Chen, Wei; Li, Xiong; Guo, Jing
    IMECS 2009: International Multi-Conference of Engineers and Computer Scientists, Vols I and II, 2009: 113-117
  • [44] Skill Reward for Safe Deep Reinforcement Learning
    Cheng, Jiangchang; Yu, Fumin; Zhang, Hongliang; Dai, Yinglong
    Ubiquitous Security, 2022, 1557: 203-213
  • [45] On the Power of Global Reward Signals in Reinforcement Learning
    Kemmerich, Thomas; Buening, Hans Kleine
    Multiagent System Technologies, 2011, 6973: 53+
  • [46] Option compatible reward inverse reinforcement learning
    Hwang, Rakhoon; Lee, Hanjin; Hwang, Hyung Ju
    Pattern Recognition Letters, 2022, 154: 83-89
  • [47] Reinforcement learning with nonstationary reward depending on the episode
    Shibuya, Takeshi; Yasunobu, Seiji
    2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2011: 2145-2150
  • [48] Shaping reward learning approach from passive samples
    Qian, Yu; Yu, Yang; Zhou, Zhi-Hua
    Ruan Jian Xue Bao/Journal of Software, 2013, 24(11): 2667-2675
  • [49] Learning Potential in Subgoal-Based Reward Shaping
    Okudo, Takato; Yamada, Seiji
    IEEE Access, 2023, 11: 17116-17137
  • [50] Learning Robot Manipulation based on Modular Reward Shaping
    Kim, Seonghyun; Jang, Ingook; Kim, Hyunseok; Park, Chan-Won; Park, Jun Hee
    11th International Conference on ICT Convergence: Data, Network, and AI in the Age of Untact (ICTC 2020), 2020: 883-886