Multi-Task Learning with Task-Specific Feature Filtering in Low-Data Condition

Cited by: 2
Authors
Lee, Sang-woo [1 ]
Lee, Ryong [2 ]
Seo, Min-seok [1 ]
Park, Jong-chan [3 ]
Noh, Hyeon-cheol [1 ]
Ju, Jin-gi [1 ]
Jang, Rae-young [2 ]
Lee, Gun-woo [2 ]
Choi, Myung-seok [2 ]
Choi, Dong-geol [1 ]
Affiliations
[1] Hanbat Natl Univ, Dept Informat & Commun Engn, Daejeon 34014, South Korea
[2] Korea Inst Sci & Technol Informat KISTI, Dept Machine Learning Data Res, Daejeon 34141, South Korea
[3] Lunit Inc, Seoul 06241, South Korea
Keywords
deep learning; multi-task learning; convolutional neural network; TIME SEMANTIC SEGMENTATION; NETWORK; MODEL;
DOI
10.3390/electronics10212691
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Multi-task learning (MTL) is a computationally efficient way to solve multiple tasks with one multi-task model instead of several single-task models. MTL is expected to learn both diverse and shareable visual features from multiple datasets. In practice, however, MTL often fails to outperform single-task learning, and recent MTL methods tend to rely on heavy task-specific heads with large overheads to generate task-specific features. In this work, we (1) validate the efficacy of MTL in low-data conditions with early-exit architectures, and (2) propose a simple feature filtering module with minimal overhead to generate task-specific features. We assume that, in low-data conditions, a model cannot learn useful low-level features because of the limited amount of data. We empirically show that MTL can significantly improve performance on all tasks under low-data conditions. We further optimize the early-exit architecture by a sweep search over the optimal feature for each task, and we propose a feature filtering module that selects features for each task. Using the optimized early-exit architecture with the feature filtering module, we improve accuracy by 15.937% on ImageNet and by 4.847% on Places365 under a low-data condition where only 5% of the original datasets are available. Our method is empirically validated on various backbones and in various MTL settings.
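The abstract does not spell out how the feature filtering module works. One common lightweight reading of "a module that selects features for each task" is a per-task channel-wise sigmoid gate applied to the shared backbone features, which adds only `num_tasks × num_channels` parameters. The sketch below is a hypothetical minimal version under that assumption (class and variable names are illustrative), not the authors' implementation:

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


class TaskFeatureFilter:
    """Hypothetical sketch of task-specific feature filtering.

    Each task owns one learnable logit per channel; the shared feature
    vector is scaled channel-wise by sigmoid(logit) before reaching that
    task's head, so each task can suppress channels it does not need.
    """

    def __init__(self, num_channels: int, num_tasks: int):
        # Logits start at 0.0, i.e. every gate opens halfway (sigmoid(0) = 0.5).
        # In a real model these would be trained jointly with the network.
        self.gate_logits = [[0.0] * num_channels for _ in range(num_tasks)]

    def forward(self, shared_features: list[float], task_id: int) -> list[float]:
        gates = [sigmoid(g) for g in self.gate_logits[task_id]]
        return [f * g for f, g in zip(shared_features, gates)]
```

With freshly initialized logits, every channel is passed through at half strength; training would push each task's gates toward 0 (filter out) or 1 (keep) per channel. The overhead is a single element-wise multiply per task, in contrast to the heavy task-specific heads the abstract criticizes.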
Pages: 14