Reinforcement online learning to rank with unbiased reward shaping

Cited by: 0
Authors
Shengyao Zhuang
Zhihao Qiao
Guido Zuccon
Affiliations
[1] The University of Queensland
Source
Information Retrieval Journal | 2022 / Vol. 25
Keywords
Online learning to rank; Unbiased reward shaping; Reinforcement learning;
DOI
Not available
Abstract
Online learning to rank (OLTR) aims to learn a ranker directly from implicit feedback derived from users’ interactions, such as clicks. Clicks, however, are a biased signal: specifically, top-ranked documents are likely to attract more clicks than documents further down the ranking (position bias). In this paper, we propose a novel learning algorithm for OLTR that uses reinforcement learning to optimize rankers: Reinforcement Online Learning to Rank (ROLTR). In ROLTR, the gradients of the ranker are estimated based on the rewards assigned to clicked and unclicked documents. To remove the position bias contained in the reward signals, we introduce unbiased reward shaping functions that exploit inverse propensity scoring for clicked and unclicked documents. Because our method also models unclicked documents, fewer user interactions are required to effectively train a ranker, which yields gains in efficiency. Empirical evaluation on standard OLTR datasets shows that ROLTR achieves state-of-the-art performance and provides a significantly better user experience than other OLTR approaches. To facilitate the reproducibility of our experiments, we make all experiment code available at https://github.com/ielab/OLTR.
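The abstract describes ROLTR's key ingredients, namely propensity-corrected rewards for both clicked and unclicked documents and a policy-gradient update of the ranker, only at a high level. The sketch below illustrates one way these pieces could fit together; it is a minimal illustration under our own assumptions (the +1/-1 base rewards, the examination propensities, the linear Plackett-Luce policy, and all function names are hypothetical, not the paper's formulation), so refer to the authors' code at https://github.com/ielab/OLTR for the actual method.

```python
import numpy as np

# Illustrative sketch only: the reward values, the handling of unclicked
# documents, and all names below are our assumptions, not the formulation of
# Zhuang et al.; see https://github.com/ielab/OLTR for the authors' code.

def ips_shaped_rewards(clicks, exam_propensities):
    """Assign position-debiased rewards to every rank of a displayed list.

    clicks[i]            -- 1.0 if the document at rank i was clicked, else 0.0
    exam_propensities[i] -- estimated probability that rank i was examined
                            (the position-bias propensity)
    Clicked documents get a positive reward up-weighted by 1 / propensity, so a
    click at a rarely examined rank counts more; unclicked documents get a
    (hypothetical) fixed negative reward.
    """
    clicks = np.asarray(clicks, dtype=float)
    p = np.clip(np.asarray(exam_propensities, dtype=float), 1e-6, 1.0)
    return clicks / p - (1.0 - clicks)


def reinforce_gradient(doc_features, ranking, rewards, theta):
    """REINFORCE-style gradient estimate for a linear Plackett-Luce ranker.

    The policy places documents one rank at a time with probability
    proportional to exp(theta . x); each step's log-probability gradient is
    weighted by the shaped reward of the document placed at that rank.
    """
    grad = np.zeros_like(theta)
    remaining = list(ranking)                      # documents not yet placed
    for rank, doc in enumerate(ranking):
        feats = doc_features[remaining]            # candidates at this step
        scores = feats @ theta
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # d/dtheta log P(pick `doc` at this step) = x_doc - E_probs[x]
        grad += rewards[rank] * (doc_features[doc] - probs @ feats)
        remaining.remove(doc)
    return grad
```

In an online setting the examination propensities would themselves be estimated from logged interactions (for example with a click model), the gradient would be averaged over many queries, and theta would be updated by gradient ascent.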
Pages: 386-413
Number of pages: 27
Related papers
50 records in total
  • [31] Potential-based reward shaping using state-space segmentation for efficiency in reinforcement learning
    Bal, Melis Ilayda
Aydin, Huseyin
Iyigun, Cem
    Polat, Faruk
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 157 : 469 - 484
  • [32] A Multi-Dimensional Goal Aircraft Guidance Approach Based on Reinforcement Learning with a Reward Shaping Algorithm
    Zu, Wenqiang
    Yang, Hongyu
    Liu, Renyu
    Ji, Yulong
    SENSORS, 2021, 21 (16)
  • [33] Population-based exploration in reinforcement learning through repulsive reward shaping using eligibility traces
    Bal, Melis Ilayda
    Iyigun, Cem
    Polat, Faruk
    Aydin, Huseyin
    ANNALS OF OPERATIONS RESEARCH, 2024, 335 (02) : 689 - 725
  • [34] Actively learning costly reward functions for reinforcement learning
    Eberhard, Andre
    Metni, Houssam
    Fahland, Georg
    Stroh, Alexander
    Friederich, Pascal
    MACHINE LEARNING-SCIENCE AND TECHNOLOGY, 2024, 5 (01):
  • [35] Learning classifier system with average reward reinforcement learning
    Zang, Zhaoxiang
    Li, Dehua
    Wang, Junying
    Xia, Dan
    KNOWLEDGE-BASED SYSTEMS, 2013, 40 : 58 - 71
  • [36] Reinforcement Learning Control With Knowledge Shaping
    Gao, Xiang
    Si, Jennie
    Huang, He
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (03) : 3156 - 3167
  • [37] Shaping the Behavior of Reinforcement Learning Agents
    Sidiropoulos, George
    Kiourt, Chairi
    Sevetlidis, Vasileios
    Pavlidis, George
    25TH PAN-HELLENIC CONFERENCE ON INFORMATICS WITH INTERNATIONAL PARTICIPATION (PCI2021), 2021, : 448 - 453
  • [38] Reinforcement Learning for Data Preparation with Active Reward Learning
    Berti-Equille, Laure
    INTERNET SCIENCE, INSCI 2019, 2019, 11938 : 121 - 132
  • [39] Bottom-up multi-agent reinforcement learning by reward shaping for cooperative-competitive tasks
    Aotani, Takumi
    Kobayashi, Taisuke
    Sugimoto, Kenji
    APPLIED INTELLIGENCE, 2021, 51 (07) : 4434 - 4452