Multi-Task Learning with Prior Information

Cited by: 0
Authors
Zhang, Mengyuan [1]
Liu, Kai [1]
Affiliations
[1] Clemson Univ, Sch Comp, Clemson, SC 29634 USA
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Multi-task learning aims to boost the generalization performance of multiple related tasks simultaneously by leveraging the information contained in those tasks. In this paper, we propose a multi-task learning framework that utilizes prior knowledge of the relations between features. We also impose a penalty on the variation of each feature's coefficients across tasks, so that related tasks have similar coefficients on the features they share. In addition, we capture a common set of features via group sparsity. The objective is formulated as a non-smooth convex optimization problem, which can be solved with various methods, including the (sub)gradient descent method, the iterative shrinkage-thresholding algorithm (ISTA) with backtracking, and its momentum variant, the fast iterative shrinkage-thresholding algorithm (FISTA). In light of the sublinear convergence rate of the aforementioned methods, we propose an asymptotically linearly convergent algorithm with theoretical guarantees. Empirical experiments on both regression and classification tasks with real-world datasets demonstrate that our proposed algorithms are capable of improving the generalization performance of multiple related tasks.
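To make the optimization setup concrete, the following is a minimal sketch of FISTA applied to the group-sparsity component of such an objective: a shared least-squares loss over tasks plus a row-wise l2,1 penalty that selects a common feature set. This is an illustrative simplification, not the paper's exact formulation; the feature-relation prior and the coefficient-variation penalty described in the abstract are omitted, and all function names and the choice of objective are assumptions.

```python
import numpy as np

def group_soft_threshold(W, tau):
    """Proximal operator of tau * sum_j ||W[j, :]||_2:
    shrinks each feature's coefficient row jointly across all tasks,
    zeroing out features that no task needs (group sparsity)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return W * scale

def fista_multitask(X, Y, lam, n_iter=500):
    """FISTA for  min_W  0.5 * ||X W - Y||_F^2 + lam * sum_j ||W[j, :]||_2,
    where column k of W holds the coefficients of task k (shared design X)."""
    d = X.shape[1]
    n_tasks = Y.shape[1]
    # Lipschitz constant of the smooth part's gradient (largest eigenvalue of X^T X).
    L = np.linalg.eigvalsh(X.T @ X).max()
    W = np.zeros((d, n_tasks))
    Z = W.copy()   # extrapolated (momentum) point
    t = 1.0        # momentum parameter
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)                       # gradient of the smooth loss at Z
        W_new = group_soft_threshold(Z - grad / L, lam / L)  # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0     # Nesterov momentum update
        Z = W_new + ((t - 1.0) / t_new) * (W_new - W)
        W, t = W_new, t_new
    return W
```

Replacing the momentum update with `Z = W_new` recovers plain ISTA, whose O(1/k) rate motivates the faster schemes discussed in the paper; FISTA improves this to O(1/k^2) at the cost of one extra matrix addition per iteration.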
Pages: 586-594
Page count: 9
Related Papers (50 total)
  • [21] Boosted multi-task learning
    Chapelle, Olivier
    Shivaswamy, Pannagadatta
    Vadrevu, Srinivas
    Weinberger, Kilian
    Zhang, Ya
    Tseng, Belle
    MACHINE LEARNING, 2011, 85 (1-2) : 149 - 173
  • [22] An overview of multi-task learning
    Zhang, Yu
    Yang, Qiang
    NATIONAL SCIENCE REVIEW, 2018, 5 (01) : 30 - 43
  • [23] On Partial Multi-Task Learning
    He, Yi
    Wu, Baijun
    Wu, Di
    Wu, Xindong
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1174 - 1181
  • [24] Pareto Multi-Task Learning
    Lin, Xi
    Zhen, Hui-Ling
    Li, Zhenhua
    Zhang, Qingfu
    Kwong, Sam
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [25] Federated Multi-Task Learning
    Smith, Virginia
    Chiang, Chao-Kai
    Sanjabi, Maziar
    Talwalkar, Ameet
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [26] Asynchronous Multi-Task Learning
    Baytas, Inci M.
    Yan, Ming
    Jain, Anil K.
    Zhou, Jiayu
    2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2016, : 11 - 20
  • [27] Calibrated Multi-Task Learning
    Nie, Feiping
    Hu, Zhanxuan
    Li, Xuelong
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2012 - 2021
  • [30] Distributed Multi-Task Learning
    Wang, Jialei
    Kolar, Mladen
    Srebro, Nathan
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 51, 2016, 51 : 751 - 760