On Partial Multi-Task Learning

Cited by: 5
Authors
He, Yi [1 ]
Wu, Baijun [1 ]
Wu, Di [2 ]
Wu, Xindong [3 ,4 ]
Affiliations
[1] Univ Louisiana Lafayette, Sch Comp & Informat, Lafayette, LA 70504 USA
[2] Chinese Acad Sci, Chongqing Inst Green & Intelligent Technol, Chongqing, Peoples R China
[3] Mininglamp Acad Sci, Mininglamp Technol, Beijing, Peoples R China
[4] Hefei Univ Technol, Minist Educ, Key Lab Knowledge Engn Big Data, Hefei, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
MATRIX COMPLETION; CLASSIFICATION;
DOI
10.3233/FAIA200216
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-Task Learning (MTL) has shown its effectiveness in real applications where many related tasks can be handled together. Existing MTL methods make predictions for multiple tasks based on the data examples of the corresponding tasks. In practice, however, the data examples of some tasks are expensive or time-consuming to collect, which limits the applicability of MTL. For example, a patient may be asked to provide both microtome test reports and MRI images for illness diagnosis in an MTL-based system [37,40]. It would be valuable if MTL could predict the abnormalities for such medical tests from easy-to-collect data examples of other related tests, instead of collecting data examples from the tests directly. We term this new paradigm multi-task learning from partial examples. The challenges of partial multi-task learning are twofold. First, the data examples from different tasks may be represented in different feature spaces. Second, the data examples may be incomplete for predicting the labels of all tasks. To overcome these challenges, in this paper we propose a novel algorithm named Generative Learning with Partial Multi-Tasks (GPMT). The key idea of GPMT is to discover a shared latent feature space that harmonizes the disparate feature information of multiple tasks. Given a partial example, the information contained in its missing feature representations is recovered by projecting the example onto the latent space. A learner trained on the latent space then enjoys the complete information included in the original features and the recovered missing features, and thus can predict the labels of partial examples. Our theoretical analysis shows that GPMT guarantees a performance gain compared with training an individual learner for each task. Extensive experiments demonstrate the superiority of GPMT on both synthetic and real datasets.
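The shared-latent-space idea described in the abstract can be illustrated with a generic low-rank matrix-completion sketch. This is not the authors' GPMT algorithm: the two-task setup, the alternating-least-squares solver, and all names below are invented for illustration, standing in for the general recipe of recovering a partial example's missing feature blocks by projecting onto a latent space learned across tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two tasks whose examples live in different feature
# spaces (d1 and d2 dims). Stacking all examples into one matrix of shape
# (n, d1 + d2) leaves missing blocks where a task's examples lack the
# other task's features. A few "bridge" examples carry both feature sets,
# which couples the two spaces through the shared latent factors.
n, d1, d2, k = 60, 8, 6, 3
Z_true = rng.normal(size=(n, k))           # shared latent factors
W_true = rng.normal(size=(k, d1 + d2))     # latent-to-feature projection
X_full = Z_true @ W_true

# Observation mask: rows 0-19 have only task-1 features, rows 40-59 only
# task-2 features, and rows 20-39 observe both (the bridge examples).
M = np.ones_like(X_full, dtype=bool)
M[:20, d1:] = False
M[40:, :d1] = False
X_obs = np.where(M, X_full, 0.0)

def fit_latent_space(X, M, k, n_iters=100, lam=1e-2):
    """Fit a rank-k latent space Z and projection W from observed entries
    only, via alternating ridge regressions (a standard matrix-completion
    heuristic, used here as a stand-in for the learned latent space)."""
    n, d = X.shape
    Z = rng.normal(scale=0.1, size=(n, k))
    W = rng.normal(scale=0.1, size=(k, d))
    I = lam * np.eye(k)
    for _ in range(n_iters):
        for i in range(n):           # update each example's latent code
            o = M[i]
            Wo = W[:, o]
            Z[i] = np.linalg.solve(Wo @ Wo.T + I, Wo @ X[i, o])
        for j in range(d):           # update each feature's projection
            o = M[:, j]
            Zo = Z[o]
            W[:, j] = np.linalg.solve(Zo.T @ Zo + I, Zo.T @ X[o, j])
    return Z, W

Z, W = fit_latent_space(X_obs, M, k)
X_hat = Z @ W  # projecting through the latent space fills the missing blocks

# Relative error on the entries that were never observed.
err_missing = np.linalg.norm((X_hat - X_full)[~M]) / np.linalg.norm(X_full[~M])
print(f"relative error on missing features: {err_missing:.3f}")
```

Once the missing blocks are imputed this way, a single downstream classifier can be trained on the latent codes `Z` (or on the completed feature matrix), which is the sense in which a learner on the latent space "enjoys complete information" from both the original and recovered features.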
Pages: 1174-1181
Page count: 8