Robust Ranking Model via Bias-Variance Optimization

Times cited: 0
Authors
Li, Jinzhong [1 ,2 ,3 ,4 ]
Liu, Guanjun [3 ,4 ]
Xia, Jiewu [1 ,2 ]
Affiliations
[1] Jinggangshan Univ, Dept Comp Sci & Technol, Jian 343009, Jiangxi, Peoples R China
[2] Univ Elect Sci & Technol China, Network & Data Secur Key Lab Sichuan Prov, Chengdu 610054, Sichuan, Peoples R China
[3] Tongji Univ, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China
[4] Tongji Univ, Minist Educ, Key Lab Embedded Syst & Serv Comp, Shanghai 201804, Peoples R China
Keywords
Learning to rank; Ranking model; Effectiveness-robustness tradeoff; Bias-variance tradeoff; LambdaMART algorithm;
DOI
10.1007/978-3-319-63315-2_62
Chinese Library Classification (CLC) code
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Improving average effectiveness is an objective of paramount importance for ranking models in the learning-to-rank task. Another, equally important objective is robustness: a ranking model should minimize the variance of its effectiveness across queries when the model is perturbed. However, most existing learning-to-rank methods optimize only the average effectiveness over all queries and leave robustness unaddressed. An ideal ranking model balances the trade-off between effectiveness and robustness by achieving high average effectiveness together with low variance of effectiveness. This paper investigates the effectiveness-robustness trade-off in learning to rank from a novel perspective, namely the bias-variance trade-off, and presents a unified objective function that captures the trade-off between these two competing measures so as to jointly optimize the effectiveness and robustness of a ranking model. We modify the gradient of LambdaMART, a state-of-the-art learning-to-rank algorithm, according to the unified objective function, and demonstrate how the combination of bias and variance can be jointly optimized in a principled learning objective. Experimental results show that the gradient-modified LambdaMART improves both the robustness and the normalized effectiveness of the ranking model by combining bias and variance.
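As a rough illustration of the kind of unified objective the abstract describes (a sketch, not the paper's exact formulation), the Python snippet below combines a bias term (the shortfall of mean per-query effectiveness, e.g. NDCG, from the ideal score of 1) with a variance term (the spread of effectiveness across queries, a proxy for robustness) into a single risk. The function name unified_risk and the trade-off weight alpha are illustrative assumptions, not identifiers from the paper.

    import statistics

    def unified_risk(per_query_ndcg, alpha=0.5):
        """Hypothetical bias-variance style objective for a ranking model.

        per_query_ndcg : effectiveness scores (e.g. NDCG@k), one per query.
        alpha          : assumed trade-off weight between bias and variance terms.
        Returns a scalar risk; lower is better.
        """
        mean_eff = statistics.mean(per_query_ndcg)      # average effectiveness
        var_eff = statistics.pvariance(per_query_ndcg)  # variance across queries
        bias_term = 1.0 - mean_eff                      # shortfall from the ideal score of 1
        return (1.0 - alpha) * bias_term + alpha * var_eff

    # Example: two models with the same mean NDCG but different spread across queries.
    stable   = [0.62, 0.60, 0.61, 0.63]
    unstable = [0.95, 0.30, 0.85, 0.36]
    print(unified_risk(stable, alpha=0.5))    # lower risk: same bias, far less variance
    print(unified_risk(unstable, alpha=0.5))  # higher risk despite an identical mean

Under such an objective, a learner (for instance a gradient-modified LambdaMART as in the paper) would prefer the more stable model even though both have the same average effectiveness; how the gradient is actually reweighted is specified in the paper itself.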
Pages: 706-718
Number of pages: 13