Robust Ranking Model via Bias-Variance Optimization

Cited by: 0
Authors
Li, Jinzhong [1 ,2 ,3 ,4 ]
Liu, Guanjun [3 ,4 ]
Xia, Jiewu [1 ,2 ]
Affiliations
[1] Jinggangshan Univ, Dept Comp Sci & Technol, Jian 343009, Jiangxi, Peoples R China
[2] Univ Elect Sci & Technol China, Network & Data Secur Key Lab Sichuan Prov, Chengdu 610054, Sichuan, Peoples R China
[3] Tongji Univ, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China
[4] Tongji Univ, Minist Educ, Key Lab Embedded Syst & Serv Comp, Shanghai 201804, Peoples R China
Keywords
Learning to rank; Ranking model; Effectiveness-robustness tradeoff; Bias-variance tradeoff; LambdaMART algorithm;
DOI
10.1007/978-3-319-63315-2_62
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Improving average effectiveness is an objective of paramount importance for a ranking model in the learning to rank task. An equally important objective is robustness: a ranking model should minimize the variance of effectiveness across queries when the model is perturbed. However, most existing learning to rank methods optimize only the average effectiveness over all queries and leave robustness unaddressed. An ideal ranking model should balance the trade-off between effectiveness and robustness, achieving high average effectiveness together with low variance of effectiveness. This paper investigates the effectiveness-robustness trade-off in learning to rank from a novel perspective, namely the bias-variance trade-off, and presents a unified objective function that captures the trade-off between these two competing measures for jointly optimizing the effectiveness and robustness of a ranking model. We modify the gradient of LambdaMART, a state-of-the-art learning to rank algorithm, based on the unified objective function, and demonstrate a strategy for jointly optimizing the combination of bias and variance in a principled learning objective. Experimental results show that the gradient-modified LambdaMART improves both the robustness and the normalized effectiveness of the ranking model by combining bias and variance.
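
The abstract does not give the exact objective or gradient, so the following is only a minimal Python sketch, under stated assumptions, of how a bias term (average effectiveness deficit) and a variance term (spread of effectiveness across queries) could be combined into one objective and used to reweight LambdaMART-style lambda gradients. The weight alpha, the linear scaling rule in reweight_lambdas, and the use of NDCG@10 scores are illustrative assumptions, not the authors' published formulation.

```python
import numpy as np

def unified_objective(effectiveness, alpha=0.5):
    """Combine a bias term and a variance term into a single objective.

    effectiveness: per-query effectiveness scores (e.g. NDCG@10 values in [0, 1]).
    alpha: hypothetical weight trading off the bias term against the variance term.
    """
    effectiveness = np.asarray(effectiveness, dtype=float)
    bias = 1.0 - effectiveness.mean()      # shortfall from the ideal score of 1.0
    variance = effectiveness.var()         # spread of effectiveness across queries
    return alpha * bias + (1.0 - alpha) * variance


def reweight_lambdas(lambdas, query_effectiveness, mean_effectiveness, alpha=0.5):
    """Scale a query's lambda gradients by its contribution to the variance term.

    Queries whose effectiveness falls below the collection mean receive larger
    updates; this linear scaling rule is an illustrative assumption, not the
    gradient modification published in the paper.
    """
    gap = mean_effectiveness - query_effectiveness   # positive for under-performing queries
    scale = 1.0 + (1.0 - alpha) * max(gap, 0.0)
    return [lam * scale for lam in lambdas]


if __name__ == "__main__":
    # Hypothetical per-query NDCG@10 values on a validation set.
    ndcg = [0.82, 0.35, 0.67, 0.91, 0.48]
    print("unified objective:", unified_objective(ndcg, alpha=0.5))
    # Reweight the lambdas of the weakest query (NDCG = 0.35).
    print(reweight_lambdas([0.1, -0.05, 0.02], 0.35, float(np.mean(ndcg))))
```

In this reading, lowering the variance term amounts to boosting the gradient contribution of under-performing queries, which is one plausible way a unified bias-variance objective could steer LambdaMART toward more robust rankings.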
Pages: 706-718
Page count: 13
Related Papers
50 records in total
  • [1] Bias-Variance Decomposition for Ranking
    Shivaswamy, Pannaga
    Chandrashekar, Ashok
    WSDM '21: PROCEEDINGS OF THE 14TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2021, : 472 - 480
  • [2] Controlling the Bias-Variance Tradeoff via Coherent Risk for Robust Learning with Kernels
    Koppel, Alec
    Bedi, Amrit S.
    Rajawat, Ketan
    2019 AMERICAN CONTROL CONFERENCE (ACC), 2019, : 3519 - 3525
  • [3] Bias-variance control via hard points shaving
    Merler, S
    Caprile, B
    Furlanello, C
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2004, 18 (05) : 891 - 903
  • [4] Meta-Optimization of Bias-Variance Trade-Off in Stochastic Model Learning
    Aotani, Takumi
    Kobayashi, Taisuke
    Sugimoto, Kenji
    IEEE ACCESS, 2021, 9 : 148783 - 148799
  • [5] Bandit Smooth Convex Optimization: Improving the Bias-Variance Tradeoff
    Dekel, Ofer
    Eldan, Ronen
    Koren, Tomer
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 28 (NIPS 2015), 2015, 28
  • [6] Bias-variance decomposition in Genetic Programming
    Kowaliw, Taras
    Doursat, Rene
    OPEN MATHEMATICS, 2016, 14 : 62 - 80
  • [7] Bias-Variance Decomposition of IR Evaluation
    Zhang, Peng
    Song, Dawei
    Wang, Jun
    Hou, Yuexian
    SIGIR'13: THE PROCEEDINGS OF THE 36TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH & DEVELOPMENT IN INFORMATION RETRIEVAL, 2013, : 1021 - 1024
  • [8] A Bias-Variance Approach for the Nonlocal Means
    Duval, Vincent
    Aujol, Jean-Francois
    Gousseau, Yann
    SIAM JOURNAL ON IMAGING SCIENCES, 2011, 4 (02): : 760 - 788
  • [9] On Feature Selection, Bias-Variance, and Bagging
    Munson, N. Arthur
    Caruana, Rich
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT II, 2009, 5782 : 144 - +
  • [10] The bias-variance decomposition in profiled attacks
    Lerman, Liran
    Bontempi, Gianluca
    Markowitch, Olivier
    JOURNAL OF CRYPTOGRAPHIC ENGINEERING, 2015, 5 (04) : 255 - 267