Revisiting Two-tower Models for Unbiased Learning to Rank

Cited by: 9
Authors
Yan, Le [1]
Qin, Zhen [1]
Zhuang, Honglei [1]
Wang, Xuanhui [1]
Bendersky, Michael [1]
Najork, Marc [1]
Affiliations
[1] Google, Mountain View, CA 94043 USA
Keywords
Unbiased Learning to Rank; Expectation Maximization; Bias Factorization
DOI
10.1145/3477495.3531837
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The two-tower architecture is commonly used in real-world systems for Unbiased Learning to Rank (ULTR): a Deep Neural Network (DNN) tower models unbiased relevance predictions, while another tower models observation biases inherent in training data such as user clicks. This architecture introduces inductive biases that allow more efficient use of limited observational logs and better generalization at deployment than a single-tower architecture, which may learn spurious correlations between relevance predictions and biases. However, despite their popularity, it is largely neglected in the literature that existing two-tower models assume the joint distribution of relevance and observation probabilities is completely factorizable. In this work, we revisit two-tower models for ULTR. We rigorously show that the factorization assumption can be too strong for real-world user behaviors, and that existing methods may easily fail under slightly milder assumptions. We then propose several novel ideas that accommodate a wider spectrum of user behaviors while remaining within the two-tower framework, preserving its simplicity and generalizability. Our concerns about existing two-tower models and the effectiveness of our proposed methods are validated on both controlled synthetic and large-scale real-world datasets.
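In practice, the factorization assumption discussed above is usually realized by adding the outputs of the two towers in logit space before a sigmoid click likelihood. The sketch below is a minimal, hypothetical illustration of that common setup, not the authors' implementation; the PyTorch framing, layer sizes, feature dimensions, and the position-only bias tower are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TwoTowerULTR(nn.Module):
    """Hypothetical additive two-tower model for click modeling (illustrative only)."""

    def __init__(self, rel_dim: int = 32, num_positions: int = 10):
        super().__init__()
        # Relevance tower: a DNN over query-document features.
        self.relevance_tower = nn.Sequential(
            nn.Linear(rel_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )
        # Bias tower: here simply an embedding of the display position.
        self.bias_tower = nn.Embedding(num_positions, 1)

    def forward(self, rel_features: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
        rel_logit = self.relevance_tower(rel_features).squeeze(-1)
        bias_logit = self.bias_tower(positions).squeeze(-1)
        # The towers are combined additively before the sigmoid, so relevance and
        # observation bias contribute separably -- the factorization assumption
        # the abstract above calls into question.
        return rel_logit + bias_logit

# Toy usage (all dimensions are arbitrary assumptions): train on click labels.
model = TwoTowerULTR()
features = torch.randn(4, 32)                # 4 documents, 32 query-doc features each
positions = torch.tensor([0, 1, 2, 3])       # display positions of the 4 documents
clicks = torch.tensor([1.0, 0.0, 0.0, 0.0])  # observed click labels
loss = nn.functional.binary_cross_entropy_with_logits(model(features, positions), clicks)
```

At serving time, ranking typically uses the relevance tower alone, which is why how cleanly the two towers separate matters for deployment.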
Pages: 2410-2414
Number of pages: 5
Related Papers
50 records in total
  • [21] Unbiased Learning-to-Rank with Biased Feedback
    Joachims, Thorsten
    Swaminathan, Adith
    Schnabel, Tobias
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 5284 - 5288
  • [22] Unbiased Learning to Rank Based on Relevance Correction
    Wang Y.
    Lan Y.
    Pang L.
    Guo J.
    Cheng X.
Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2022, 59 (12): 2867 - 2877
  • [23] Unbiased Learning to Rank: Counterfactual and Online Approaches
    Oosterhuis, Harrie
    Jagerman, Rolf
    de Rijke, Maarten
    WWW'20: COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2020, 2020, : 299 - 300
  • [24] Unbiased Learning to Rank with Biased Continuous Feedback
    Ren, Yi
    Tang, Hongyan
    Zhu, Siwen
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 1716 - 1725
  • [25] Unbiased Learning-to-Rank with Biased Feedback
    Joachims, Thorsten
    Swaminathan, Adith
    Schnabel, Tobias
    WSDM'17: PROCEEDINGS OF THE TENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2017, : 781 - 789
  • [26] A General Framework for Pairwise Unbiased Learning to Rank
    Kurennoy, Alexey
    Coleman, John
    Harris, Ian
    Lynch, Alice
    Mac Fhearai, Oisin
    Tsatsoulis, Daphne
    PROCEEDINGS OF THE 2022 ACM SIGIR INTERNATIONAL CONFERENCE ON THE THEORY OF INFORMATION RETRIEVAL, ICTIR 2022, 2022, : 115 - 124
  • [27] ULTRA: An Unbiased Learning To Rank Algorithm Toolbox
    Tran, Anh
    Yang, Tao
    Ai, Qingyao
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 4613 - 4622
  • [28] A Two-Tower Spatial-Temporal Graph Neural Network for Traffic Speed Prediction
    Shen, Yansong
    Li, Lin
    Xie, Qing
    Li, Xin
    Xu, Guandong
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2022, PT I, 2022, 13280 : 406 - 418
  • [29] Touchformer: A Transformer-Based Two-Tower Architecture for Tactile Temporal Signal Classification
    Liu, Chongyu
    Liu, Hong
    Chen, Hu
    Du, Wenchao
    Yang, Hongyu
    IEEE TRANSACTIONS ON HAPTICS, 2024, 17 (03) : 396 - 404
  • [30] Unbiased learning for hierarchical models
    Sekino, Masashi
    Nitta, Katsumi
    2007 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2007, : 575 - 580