An Adaptive Entire-Space Multi-Scenario Multi-Task Transfer Learning Model for Recommendations

Cited: 0
Authors
Yi, Qingqing [1 ,2 ]
Tang, Jingjing [1 ,2 ]
Zhao, Xiangyu [3 ]
Zeng, Yujian [4 ]
Song, Zengchun [4 ]
Wu, Jia [5 ]
Institutions
[1] Southwestern Univ Finance & Econ, Sch Business Adm, Fac Business Adm, Chengdu 610074, Peoples R China
[2] Southwestern Univ Finance & Econ, Big Data Lab Financial Secur & Behav, Lab Philosophy & Social Sci, Minist Educ, Chengdu 610074, Peoples R China
[3] City Univ Hong Kong, Hong Kong, Peoples R China
[4] Tencent Grp, Shenzhen 518054, Peoples R China
[5] Macquarie Univ, Sch Comp, Sydney, NSW 2113, Australia
Funding
National Natural Science Foundation of China;
Keywords
Multitasking; Logic gates; Data models; Feature extraction; Registers; Training; Adaptation models; Stars; Poles and towers; Data mining; Recommendation systems; multi-scenario learning; multi-task learning; post-impression behavior decomposition; personalization;
DOI
10.1109/TKDE.2025.3536334
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-scenario and multi-task recommendation systems efficiently facilitate knowledge transfer across different scenarios and tasks. However, many existing approaches inadequately incorporate personalized information across users and scenarios. Moreover, the conversion rate (CVR) task in multi-task learning often encounters challenges such as sample selection bias, resulting from systematic differences between the training and inference sample spaces, and data sparsity due to infrequent clicks. To address these issues, we propose the Adaptive Entire-space Multi-scenario Multi-task Transfer Learning model (AEM²TL) with four key modules: 1) Scenario-CGC (Scenario-Customized Gate Control), 2) Task-CGC (Task-Customized Gate Control), 3) Personalized Gating Network, and 4) Entire-space Supervised Multi-Task Module. AEM²TL employs a multi-gate mechanism to effectively integrate shared and specific information across scenarios and tasks, enhancing prediction adaptability. To further improve task-specific personalization, it incorporates personalized prior features and applies a gating mechanism that dynamically scales the top-layer neural units. A novel post-impression behavior decomposition technique is designed to leverage all impression samples across the entire space, mitigating sample selection bias and data sparsity. Furthermore, an adaptive weighting mechanism dynamically allocates attention to tasks based on their relative importance, ensuring optimal task prioritization. Extensive experiments on one industrial and two real-world public datasets demonstrate the superiority of AEM²TL over state-of-the-art methods.
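The entire-space supervision described in the abstract follows the same idea as ESMM-style CVR modeling: decompose P(click & conversion | impression) as pCTCVR = pCTR × pCVR, so both supervised losses are defined over all impression samples and the CVR tower is never trained only on the biased clicked subset. The sketch below illustrates that loss construction with toy logits standing in for the model's tower outputs; all variable names are illustrative, and the paper's actual post-impression behavior decomposition is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y, eps=1e-7):
    """Binary cross-entropy over a batch of probabilities."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

# Toy impression-level batch: every sample is an impression, and a
# conversion can only occur on a clicked impression.
clicks      = np.array([1, 0, 1, 1, 0], dtype=float)
conversions = np.array([1, 0, 0, 1, 0], dtype=float)

# Stand-in tower outputs (in the full model these would come from the
# gated multi-scenario / multi-task network).
ctr_logits = np.array([ 2.0, -1.5,  1.0, 1.8, -0.5])
cvr_logits = np.array([ 1.2, -0.3, -1.0, 0.9, -2.0])

p_ctr   = sigmoid(ctr_logits)   # P(click | impression)
p_cvr   = sigmoid(cvr_logits)   # P(conversion | click), auxiliary head
p_ctcvr = p_ctr * p_cvr         # P(click & conversion | impression)

# Both losses are supervised on the ENTIRE impression space, so the CVR
# head receives gradient without restricting training to clicked items.
loss = bce(p_ctr, clicks) + bce(p_ctcvr, conversions)
```

Because pCTCVR is the product of two probabilities, it can never exceed pCTR, which keeps the implicit pCVR = pCTCVR / pCTR estimate in [0, 1].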
Pages: 1585 - 1598
Page count: 14
Related Papers
50 in total
  • [41] Multi-Adaptive Optimization for multi-task learning with deep neural networks
Hervella, Álvaro S.
    Rouco, Jose
    Novo, Jorge
    Ortega, Marcos
    NEURAL NETWORKS, 2024, 170 : 254 - 265
  • [42] Multi-stage Multi-task feature learning via adaptive threshold
    Fan, Ya-Ru
    Wang, Yilun
    Huang, Ting-Zhu
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 1665 - 1670
  • [43] Monocular Depth Estimation Using an Integrated Model with Multi-Task Learning and Multi-Stream
    Takamine, Michiru
    Endo, Satoshi
    Transactions of the Japanese Society for Artificial Intelligence, 2021, 36 (05): : 1 - 9
  • [44] A Novel Multi-Task driven Model for Personalized Paper Recommendations
    Zhou, Jingya
    Wang, Jie
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3, 2025, 14852 : 19 - 34
  • [45] Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis
    Zhang, Wenlu
    Li, Rongjian
    Zeng, Tao
    Sun, Qian
    Kumar, Sudhir
    Ye, Jieping
    Ji, Shuiwang
    IEEE TRANSACTIONS ON BIG DATA, 2020, 6 (02) : 322 - 333
  • [46] Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis
    Zhang, Wenlu
    Li, Rongjian
    Zeng, Tao
    Sun, Qian
    Kumar, Sudhir
    Ye, Jieping
    Ji, Shuiwang
    KDD'15: PROCEEDINGS OF THE 21ST ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2015, : 1475 - 1484
  • [47] Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis
    Zhang, Wenlu
    Li, Rongjian
    Zeng, Tao
    Sun, Qian
    Kumar, Sudhir
    Ye, Jieping
    Ji, Shuiwang
    IEEE TRANSACTIONS ON BIG DATA, 2020, 6 (02) : 322 - 333
  • [48] A feature transfer enabled multi-task deep learning model on medical imaging
    Gao, Fei
    Yoon, Hyunsoo
    Wu, Teresa
    Chu, Xianghua
    EXPERT SYSTEMS WITH APPLICATIONS, 2020, 143
  • [49] Evolutionary Multi-Task Optimization With Adaptive Intensity of Knowledge Transfer
    Zhou, Xinyu
    Mei, Neng
    Zhong, Maosheng
    Wang, Mingwen
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024,
  • [50] Learning Multi-Level Task Groups in Multi-Task Learning
    Han, Lei
    Zhang, Yu
    PROCEEDINGS OF THE TWENTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2015, : 2638 - 2644