A new transfer learning framework with application to model-agnostic multi-task learning

Cited by: 7
Authors
Gupta, Sunil [1 ]
Rana, Santu [1 ]
Saha, Budhaditya [1 ]
Phung, Dinh [1 ]
Venkatesh, Svetha [1 ]
Affiliation
[1] Deakin Univ, Ctr Pattern Recognit & Data Analyt PRaDA, Geelong Waurn Ponds Campus, Waurn Ponds, Vic, Australia
Keywords
Multi-task learning; Model-agnostic framework; Meta algorithm; Classification; Regression
DOI
10.1007/s10115-016-0926-z
CLC classification number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Learning from a small number of examples is a challenging problem in machine learning. An effective way to improve performance is to exploit knowledge from other related tasks. Multi-task learning (MTL) is one such paradigm, aiming to improve performance by jointly modeling multiple related tasks. Although numerous classification and regression models exist in the machine learning literature, most MTL models are built around ridge or logistic regression. A limited number of works propose multi-task extensions of techniques such as support vector machines and Gaussian processes. However, all of these MTL models are tied to specific classification or regression algorithms, and there is no single MTL algorithm that can be used at a meta level with any given learning algorithm. Addressing this problem, we propose a generic, model-agnostic joint modeling framework that can take any classification or regression algorithm of a practitioner's choice (standard or custom-built) and build its MTL variant. The key observation driving our framework is that, due to the small number of examples, the estimates of the task parameters are usually poor, and we show that this leads to an under-estimation of the task relatedness between any two tasks with high probability. We derive an algorithm that brings the tasks closer to their true relatedness by improving the estimates of the task parameters; this is achieved by appropriately sharing data across tasks. We provide detailed theoretical underpinnings for the algorithm. Through experiments with both synthetic and real datasets, we demonstrate that the multi-task variants of several classifiers/regressors (logistic regression, support vector machine, K-nearest neighbor, random forest, ridge regression, support vector regression) convincingly outperform their single-task counterparts. We also show that the proposed model performs comparably to or better than many state-of-the-art MTL and transfer learning baselines.
Pages: 933-973
Number of pages: 41
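The abstract describes the meta-algorithm only at a high level: fit each task independently, estimate task relatedness from the resulting (noisy) parameter estimates, then share data across sufficiently related tasks and refit. The Python sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual procedure. It assumes a linear base learner (scikit-learn's LogisticRegression) so relatedness can be read from coefficient vectors via cosine similarity, and it uses a simple similarity threshold to decide which tasks donate data; the helper names and the threshold value are illustrative assumptions. For non-parametric base learners (e.g., random forest or K-nearest neighbor) a different relatedness estimate would be needed.

    # Hypothetical sketch (not the paper's algorithm): model-agnostic MTL by
    # estimating task relatedness from per-task fits and sharing data across
    # related tasks before refitting each task's model.
    import numpy as np
    from sklearn.base import clone
    from sklearn.linear_model import LogisticRegression

    def _cosine(u, v):
        # Cosine similarity between two parameter vectors, guarded against zero norms.
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    def model_agnostic_mtl(tasks, base_estimator=None, threshold=0.5):
        # tasks: list of (X, y) pairs, one per task; returns one refitted model per task.
        if base_estimator is None:
            base_estimator = LogisticRegression(max_iter=1000)
        # Step 1: independent single-task fits (parameter estimates are noisy with few examples).
        single = [clone(base_estimator).fit(X, y) for X, y in tasks]
        coefs = [m.coef_.ravel() for m in single]  # assumes a linear base learner exposing coef_
        # Step 2: estimate pairwise task relatedness from the noisy parameter estimates.
        T = len(tasks)
        rel = np.array([[_cosine(coefs[i], coefs[j]) for j in range(T)] for i in range(T)])
        # Step 3: let sufficiently related tasks donate data, then refit each task's model.
        mtl_models = []
        for i, (Xi, yi) in enumerate(tasks):
            donors = [j for j in range(T) if j != i and rel[i, j] >= threshold]
            X_aug = np.vstack([Xi] + [tasks[j][0] for j in donors])
            y_aug = np.concatenate([yi] + [tasks[j][1] for j in donors])
            mtl_models.append(clone(base_estimator).fit(X_aug, y_aug))
        return mtl_models

    # Example usage with synthetic tasks:
    # tasks = [(np.random.randn(20, 5), np.random.randint(0, 2, 20)) for _ in range(4)]
    # models = model_agnostic_mtl(tasks, threshold=0.3)

The key design point mirrored here is that the base estimator is treated as a black box that is only cloned and refit, which is what makes the wrapper applicable to any learner; the choice of relatedness measure and sharing rule is where the paper's theoretical contribution lies and is only approximated in this sketch.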