On Partial Multi-Task Learning

Cited: 5
Authors
He, Yi [1 ]
Wu, Baijun [1 ]
Wu, Di [2 ]
Wu, Xindong [3 ,4 ]
Affiliations
[1] Univ Louisiana Lafayette, Sch Comp & Informat, Lafayette, LA 70504 USA
[2] Chinese Acad Sci, Chongqing Inst Green & Intelligent Technol, Chongqing, Peoples R China
[3] Mininglamp Acad Sci, Mininglamp Technol, Beijing, Peoples R China
[4] Hefei Univ Technol, Minist Educ, Key Lab Knowledge Engn Big Data, Hefei, Peoples R China
Funding
US National Science Foundation;
Keywords
MATRIX COMPLETION; CLASSIFICATION;
DOI
10.3233/FAIA200216
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Multi-Task Learning (MTL) has shown its effectiveness in real applications where many related tasks can be handled together. Existing MTL methods make predictions for multiple tasks based on the data examples of the corresponding tasks. In practice, however, the data examples of some tasks are expensive or time-consuming to collect, which reduces the applicability of MTL. For example, a patient may be asked to provide her microtome test reports and MRI images for illness diagnosis in an MTL-based system [37,40]. It would be valuable if MTL could predict the abnormalities for such medical tests by being fed some easy-to-collect data examples from other related tests, instead of directly collecting data examples from the tests themselves. We term this new paradigm multi-task learning from partial examples. The challenges of partial multi-task learning are twofold. First, the data examples from different tasks may be represented in different feature spaces. Second, the data examples could be incomplete for predicting the labels of all tasks. To overcome these challenges, in this paper we propose a novel algorithm, named Generative Learning with Partial Multi-Tasks (GPMT). The key idea of GPMT is to discover a shared latent feature space that harmonizes the disparate feature information of multiple tasks. Given a partial example, the information contained in its missing feature representations is recovered by projecting the example onto the latent space. A learner trained on the latent space then enjoys the complete information included in the original features and the recovered missing features, and thus can predict the labels for the partial examples. Our theoretical analysis shows that GPMT guarantees a performance gain compared with training an individual learner for each task. Extensive experiments demonstrate the superiority of GPMT on both synthetic and real datasets.
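To make the shared-latent-space idea in the abstract concrete, here is a minimal sketch, not the authors' GPMT implementation: it assumes a linear map from a shared latent code to each task's feature space, fit by alternating least squares, and all names (latent_dim, W1, W2, etc.) are illustrative assumptions.

```python
# Minimal sketch of the latent-space idea described in the abstract
# (illustrative only; GPMT itself is a generative method with guarantees).
import numpy as np

rng = np.random.default_rng(0)

# Two related tasks whose examples live in different feature spaces.
d1, d2, latent_dim, n = 20, 35, 5, 200

# Synthetic complete examples: both views generated from one latent code.
Z_true = rng.normal(size=(n, latent_dim))
X1 = Z_true @ rng.normal(size=(latent_dim, d1)) + 0.01 * rng.normal(size=(n, d1))
X2 = Z_true @ rng.normal(size=(latent_dim, d2)) + 0.01 * rng.normal(size=(n, d2))

# Alternating least squares: fit shared codes Z and per-task maps W1, W2
# so both feature views are reconstructed from the same latent space.
Z = rng.normal(size=(n, latent_dim))
I = 1e-6 * np.eye(latent_dim)  # ridge term keeps the solves well-posed
for _ in range(50):
    # Update per-task maps given the current codes.
    W1 = np.linalg.solve(Z.T @ Z + I, Z.T @ X1)
    W2 = np.linalg.solve(Z.T @ Z + I, Z.T @ X2)
    # Update codes given the maps, using both views jointly.
    W = np.hstack([W1, W2])
    X = np.hstack([X1, X2])
    Z = np.linalg.solve(W @ W.T + I, W @ X.T).T

# A "partial" example: only the task-1 view is observed.
x1_partial = X1[0]
# Project it onto the latent space using the task-1 map alone ...
z = np.linalg.solve(W1 @ W1.T + I, W1 @ x1_partial)
# ... then recover the missing task-2 features from the shared code.
x2_recovered = z @ W2
print("task-2 recovery error:", np.linalg.norm(x2_recovered - X2[0]))
```

The sketch only shows how a shared latent space lets a partial example's missing view be recovered before a downstream learner is applied; the paper's actual model, training objective, and theoretical analysis are more involved.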
Pages: 1174-1181
Page count: 8
Related Papers
50 records in total
  • [21] MULTI-TASK DISTILLATION: TOWARDS MITIGATING THE NEGATIVE TRANSFER IN MULTI-TASK LEARNING
    Meng, Ze
    Yao, Xin
    Sun, Lifeng
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 389 - 393
  • [22] Task Variance Regularized Multi-Task Learning
    Mao, Yuren
    Wang, Zekai
    Liu, Weiwei
    Lin, Xuemin
    Hu, Wenbin
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (08) : 8615 - 8629
  • [23] Task Switching Network for Multi-task Learning
    Sun, Guolei
    Probst, Thomas
    Paudel, Danda Pani
    Popovic, Nikola
    Kanakis, Menelaos
    Patel, Jagruti
    Dai, Dengxin
    Van Gool, Luc
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 8271 - 8280
  • [24] Multi-Task Multi-Sample Learning
    Aytar, Yusuf
    Zisserman, Andrew
    COMPUTER VISION - ECCV 2014 WORKSHOPS, PT III, 2015, 8927 : 78 - 91
  • [25] Learning Task Relational Structure for Multi-Task Feature Learning
    Wang, De
    Nie, Feiping
    Huang, Heng
    2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2016, : 1239 - 1244
  • [26] Learning Task Relatedness in Multi-Task Learning for Images in Context
    Strezoski, Gjorgji
    van Noord, Nanne
    Worring, Marcel
    ICMR'19: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, 2019, : 78 - 86
  • [27] Learning Tree Structure in Multi-Task Learning
    Han, Lei
    Zhang, Yu
    KDD'15: PROCEEDINGS OF THE 21ST ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2015, : 397 - 406
  • [28] Learning to Resolve Conflicts in Multi-Task Learning
    Tang, Min
    Jin, Zhe
    Zou, Lixin
    Liang, Shiuan-Ni
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT III, 2023, 14256 : 477 - 489
  • [29] Multi-task Learning with Modular Reinforcement Learning
    Xue, Jianyong
    Alexandre, Frederic
    FROM ANIMALS TO ANIMATS 16, 2022, 13499 : 127 - 138
  • [30] Hierarchical Prompt Learning for Multi-Task Learning
    Liu, Yajing
    Lu, Yuning
    Liu, Hao
    An, Yaozu
    Xu, Zhuoran
    Yao, Zhuokun
    Zhang, Baofeng
    Xiong, Zhiwei
    Gui, Chenguang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 10888 - 10898