Multi-Task Learning with Task-Specific Feature Filtering in Low-Data Condition

Cited by: 2
Authors
Lee, Sang-woo [1]
Lee, Ryong [2]
Seo, Min-seok [1]
Park, Jong-chan [3]
Noh, Hyeon-cheol [1]
Ju, Jin-gi [1]
Jang, Rae-young [2]
Lee, Gun-woo [2]
Choi, Myung-seok [2]
Choi, Dong-geol [1]
Affiliations
[1] Hanbat Natl Univ, Dept Informat & Commun Engn, Daejeon 34014, South Korea
[2] Korea Inst Sci & Technol Informat KISTI, Dept Machine Learning Data Res, Daejeon 34141, South Korea
[3] Lunit Inc, Seoul 06241, South Korea
Keywords
deep learning; multi-task learning; convolutional neural network; TIME SEMANTIC SEGMENTATION; NETWORK; MODEL
DOI
10.3390/electronics10212691
CLC classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Multi-task learning (MTL) is a computationally efficient way to solve multiple tasks with one multi-task model instead of multiple single-task models. MTL is expected to learn visual features that are both diverse and shareable across multiple datasets. In practice, however, MTL often fails to outperform single-task learning, and recent MTL methods tend to rely on heavy task-specific heads with large overheads to generate task-specific features. In this work, we (1) validate the efficacy of MTL under low-data conditions with early-exit architectures, and (2) propose a simple feature filtering module with minimal overhead to generate task-specific features. We assume that, under low-data conditions, a model cannot learn useful low-level features because of the limited amount of data, and we empirically show that MTL can significantly improve performance on all tasks in this regime. We further optimize the early-exit architecture by a sweep search over the optimal feature level for each task, and propose a feature filtering module that selects the relevant features for each task. Using the optimized early-exit architecture with the feature filtering module, we improve accuracy by 15.937% on ImageNet and 4.847% on Places365 under a low-data condition in which only 5% of the original datasets are available. Our method is empirically validated across various backbones and MTL settings.
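The abstract's "feature filtering module that selects features for each task" can be illustrated with a minimal sketch. The paper's exact mechanism is not detailed in this record, so the sketch below assumes a common realization of the idea: each task owns a learnable per-channel gate applied to the shared backbone features. The class `TaskFeatureFilter` and its initialization are hypothetical names for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TaskFeatureFilter:
    """Per-task channel gating (illustrative sketch): each task holds a
    learnable logit vector; sigmoid(logits) soft-selects which shared
    backbone channels that task uses, adding only num_tasks * num_channels
    parameters of overhead."""

    def __init__(self, num_tasks, num_channels, seed=0):
        rng = np.random.default_rng(seed)
        # one gate-logit vector per task; near-zero init gives gates ~0.5
        self.logits = rng.normal(0.0, 0.01, size=(num_tasks, num_channels))

    def __call__(self, features, task_id):
        # features: (batch, channels, H, W) output of the shared backbone
        gate = sigmoid(self.logits[task_id])           # (channels,)
        # broadcast the gate over batch and spatial dimensions
        return features * gate[None, :, None, None]

# usage: two tasks filter the same 64-channel shared feature map
filt = TaskFeatureFilter(num_tasks=2, num_channels=64)
shared = np.ones((1, 64, 8, 8), dtype=np.float32)
out0 = filt(shared, task_id=0)   # task-specific features for task 0
out1 = filt(shared, task_id=1)   # task-specific features for task 1
print(out0.shape)                # (1, 64, 8, 8)
```

In training, the gate logits would be learned jointly with the shared backbone, so each task suppresses channels that are not useful for it while still sharing the backbone weights.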
Pages: 14
Related papers (50 in total)
  • [31] Multi-stage multi-task feature learning
    Gong, Pinghua
    Ye, Jieping
    Zhang, Changshui
    JOURNAL OF MACHINE LEARNING RESEARCH, 2013, 14 : 2979 - 3010
  • [32] Multi-Task Learning Using Shared and Task Specific Information
    Srijith, P. K.
    Shevade, Shirish
    NEURAL INFORMATION PROCESSING, ICONIP 2012, PT III, 2012, 7665 : 125 - 132
  • [33] Structured feature selection and task relationship inference for multi-task learning
    Fei, Hongliang
    Huan, Jun
    KNOWLEDGE AND INFORMATION SYSTEMS, 2013, 35 (02) : 345 - 364
  • [35] Task-Specific Pruning: Efficient Parameter Reduction in Multi-task Object Detection Models
    Ke, Wei-Hsun
    Tseng, Yu-Wen
    Cheng, Wen-Huang
    2023 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2023, : 1712 - 1717
  • [36] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [38] Safe Screening for Multi-Task Feature Learning with Multiple Data Matrices
    Wang, Jie
    Ye, Jieping
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 37, 2015, 37 : 1747 - 1756
  • [39] Multi-Task GANs for View-Specific Feature Learning in Gait Recognition
    He, Yiwei
    Zhang, Junping
    Shan, Hongming
    Wang, Liang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2019, 14 (01) : 102 - 113
  • [40] Channel Attention-Based Method for Searching Task-Specific Multi-Task Network Structures
    Ye, Songtao
    Zheng, Saisai
    Xiao, Yizhang
    PROCEEDINGS OF THE 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 562 - 569