Multi-task learning to rank for web search

Cited by: 7
Authors
Chang, Yi [1 ]
Bai, Jing [2 ]
Zhou, Ke [3 ]
Xue, Gui-Rong [4 ]
Zha, Hongyuan [3 ]
Zheng, Zhaohui [1 ]
Affiliations
[1] Yahoo Labs, Sunnyvale, CA 94089 USA
[2] Microsoft Bing, Redmond, WA USA
[3] Georgia Inst Technol, Atlanta, GA 30332 USA
[4] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Keywords
Multi-task learning; Non-parametric common structure; Learning to rank; Convergence analysis;
DOI
10.1016/j.patrec.2011.09.020
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Both the quality and quantity of training data have a significant impact on the accuracy of ranking functions in web search. To meet global search needs, a commercial search engine must extend its well-tailored service to smaller countries as well. Owing to the heterogeneous nature of query intents and search results across domains (i.e., different languages and regions), it is difficult for a single generic ranking function to satisfy all types of queries. Instead, each domain should use its own well-tailored ranking function. To train a ranking function for each domain in a scalable way, it is critical to leverage existing training data to enhance the ranking functions of domains that lack sufficient training data. In this paper, we present a boosting framework for learning to rank in the multi-task learning context to attack this problem. In particular, we propose to learn non-parametric common structures adaptively from multiple tasks in a stage-wise way. An algorithm is developed to iteratively discover super-features that are effective for all tasks; the regression function for each task is then learned as a linear combination of those super-features. We evaluate the accuracy of multi-task learning methods for web search ranking using data from multiple domains of a commercial search engine. Our results demonstrate that multi-task learning methods bring significant relevance improvements over the existing baseline method. (C) 2011 Elsevier B.V. All rights reserved.
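The stage-wise procedure sketched in the abstract — repeatedly fitting one shared weak learner (a "super-feature") across all tasks, then giving each task its own linear coefficient on it — can be illustrated with a minimal sketch. This is not the authors' exact algorithm: the names (`fit_stump`, `multitask_boost`) and the specific design choices (regression stumps as weak learners, per-task least-squares coefficients) are illustrative assumptions.

```python
# Minimal sketch of stage-wise multi-task boosting with shared "super-features".
# Illustrative only; not the paper's exact algorithm.
import numpy as np

def fit_stump(X, r):
    """Fit a depth-1 regression stump (feature, threshold, left/right values)
    minimizing squared error against the residuals r."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all():            # no samples on the right side
                continue
            vl, vr = r[left].mean(), r[~left].mean()
            err = ((r[left] - vl) ** 2).sum() + ((r[~left] - vr) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, vl, vr)
    return best

def stump_predict(stump, X):
    j, t, vl, vr = stump
    return np.where(X[:, j] <= t, vl, vr)

def multitask_boost(tasks, n_stages=20):
    """tasks: list of (X, y) pairs, one per domain.
    Each stage fits ONE stump on the pooled residuals of all tasks (the
    shared super-feature), then each task gets its own scalar coefficient
    on that stump via least squares on its own residuals."""
    preds = [np.zeros(len(y)) for _, y in tasks]
    model = []                        # list of (stump, per-task coefficients)
    for _ in range(n_stages):
        X_pool = np.vstack([X for X, _ in tasks])
        r_pool = np.concatenate([y - p for (_, y), p in zip(tasks, preds)])
        stump = fit_stump(X_pool, r_pool)
        coefs = []
        for k, (X, y) in enumerate(tasks):
            h = stump_predict(stump, X)
            denom = (h * h).sum()
            c = (h * (y - preds[k])).sum() / denom if denom > 0 else 0.0
            coefs.append(c)
            preds[k] = preds[k] + c * h
        model.append((stump, coefs))
    return model, preds
```

Because each stage's coefficient is the least-squares fit to that task's residuals, every task's squared error is non-increasing across stages, which mirrors the convergence behavior a boosting analysis of this kind would study.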
Pages: 173 - 181
Page count: 9