Reinforcement online learning to rank with unbiased reward shaping

Cited by: 0
Authors
Shengyao Zhuang
Zhihao Qiao
Guido Zuccon
Institution
[1] The University of Queensland
Keywords
Online learning to rank; Unbiased reward shaping; Reinforcement learning;
DOI: not available
Abstract
Online learning to rank (OLTR) aims to learn a ranker directly from implicit feedback derived from users’ interactions, such as clicks. Clicks, however, are a biased signal: in particular, top-ranked documents are likely to attract more clicks than documents further down the ranking (position bias). In this paper, we propose a novel learning algorithm for OLTR that uses reinforcement learning to optimize rankers: Reinforcement Online Learning to Rank (ROLTR). In ROLTR, the gradients of the ranker are estimated from the rewards assigned to clicked and unclicked documents. To de-bias the position bias contained in the reward signals, we introduce unbiased reward shaping functions that exploit inverse propensity scoring for both clicked and unclicked documents. Because our method can also model unclicked documents, fewer user interactions are required to effectively train a ranker, yielding gains in efficiency. Empirical evaluation on standard OLTR datasets shows that ROLTR achieves state-of-the-art performance and provides a significantly better user experience than other OLTR approaches. To facilitate the reproducibility of our experiments, we make all experiment code available at https://github.com/ielab/OLTR.
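The abstract describes rewards that are de-biased with inverse propensity scoring (IPS) for both clicked and unclicked documents. The sketch below illustrates the general idea only: the power-law examination model, the function names, and the specific negative-reward form for unclicked documents are illustrative assumptions, not the paper's exact formulation.

```python
def examination_propensity(rank, eta=1.0):
    """Position-bias model (assumption): probability that a user examines
    the document shown at `rank` (1-based). The power-law form
    (1/rank)**eta is a common choice; eta controls bias severity."""
    return (1.0 / rank) ** eta

def ips_shaped_rewards(clicks, propensities):
    """IPS-weighted reward shaping (illustrative sketch).

    A clicked document gets a positive reward up-weighted by 1/p_k, so
    clicks on rarely examined, low-ranked documents count for more.
    An unclicked document gets a negative reward re-weighted by
    1/(1 - p_k); this particular form is an assumption for illustration.
    """
    rewards = []
    for click, p in zip(clicks, propensities):
        if click:
            rewards.append(1.0 / p)
        else:
            rewards.append(-1.0 / (1.0 - p))
    return rewards

# Example: a 3-document ranking where positions 1 and 3 were clicked.
props = [0.9, 0.5, 0.3]
r = ips_shaped_rewards([1, 0, 1], props)  # the rank-3 click is weighted most
```

In a REINFORCE-style update, such shaped rewards would multiply the log-probability gradients of the ranker's document-placement decisions, replacing the raw (biased) click signal.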
Pages: 386-413 (27 pages)