Multi-Task Learning with Task-Specific Feature Filtering in Low-Data Condition

Cited by: 2
Authors
Lee, Sang-woo [1 ]
Lee, Ryong [2 ]
Seo, Min-seok [1 ]
Park, Jong-chan [3 ]
Noh, Hyeon-cheol [1 ]
Ju, Jin-gi [1 ]
Jang, Rae-young [2 ]
Lee, Gun-woo [2 ]
Choi, Myung-seok [2 ]
Choi, Dong-geol [1 ]
Affiliations
[1] Hanbat Natl Univ, Dept Informat & Commun Engn, Daejeon 34014, South Korea
[2] Korea Inst Sci & Technol Informat KISTI, Dept Machine Learning Data Res, Daejeon 34141, South Korea
[3] Lunit Inc, Seoul 06241, South Korea
Keywords
deep learning; multi-task learning; convolutional neural network; TIME SEMANTIC SEGMENTATION; NETWORK; MODEL;
DOI
10.3390/electronics10212691
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Multi-task learning (MTL) is a computationally efficient method to solve multiple tasks in one multi-task model, instead of multiple single-task models. MTL is expected to learn both diverse and shareable visual features from multiple datasets. However, MTL performance usually does not outperform single-task learning. Recent MTL methods tend to use heavy task-specific heads with large overheads to generate task-specific features. In this work, we (1) validate the efficacy of MTL in low-data conditions with early-exit architectures, and (2) propose a simple feature filtering module with minimal overheads to generate task-specific features. We assume that, in low-data conditions, the model cannot learn useful low-level features due to the limited amount of data. We empirically show that MTL can significantly improve performance on all tasks under low-data conditions. We further optimize the early-exit architecture by a sweep search for the optimal feature for each task. Furthermore, we propose a feature filtering module that selects features for each task. Using the optimized early-exit architecture with the feature filtering module, we improve accuracy by 15.937% on ImageNet and 4.847% on Places365 under the low-data condition where only 5% of the original datasets are available. Our method is empirically validated with various backbones and various MTL settings.
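The abstract describes a lightweight module that filters shared backbone features per task. The paper's exact design is not given here, so the following is only a minimal sketch of the general idea, assuming the filter is a learned per-channel gate (all names, shapes, and the sigmoid gating choice are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

class TaskFeatureFilter:
    """Hypothetical sketch of per-task feature filtering: each task owns a
    per-channel gate vector; shared backbone features are scaled by the
    sigmoid of that gate, so each task emphasizes only the channels it
    finds useful. In a real model the gates would be learned end-to-end."""

    def __init__(self, num_tasks, num_channels, seed=0):
        rng = np.random.default_rng(seed)
        # One gate vector per task (stand-in for learned parameters).
        self.gates = rng.normal(size=(num_tasks, num_channels))

    def __call__(self, shared_features, task_id):
        # shared_features: (batch, channels, height, width) from a shared backbone.
        gate = 1.0 / (1.0 + np.exp(-self.gates[task_id]))  # sigmoid, values in (0, 1)
        # Broadcast the gate over batch and spatial dimensions.
        return shared_features * gate[None, :, None, None]

# Usage: the same shared features are filtered differently per task.
filt = TaskFeatureFilter(num_tasks=2, num_channels=8)
x = np.ones((1, 8, 4, 4))
y0 = filt(x, task_id=0)
y1 = filt(x, task_id=1)
```

The overhead of such a gate is one vector per task (channels × tasks parameters), which matches the abstract's claim of minimal overhead relative to heavy task-specific heads.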
Pages: 14
Related Papers
(50 total)
  • [41] Task-specific Compression for Multi-task Language Models using Attribution-based Pruning
    Yang, Nakyeong
    Jang, Yunah
    Lee, Hwanhee
    Jung, Seohyeong
    Jung, Kyomin
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 594 - 604
  • [42] Task Aware Feature Extraction Framework for Sequential Dependence Multi-Task Learning
    Tao, Xuewen
    Ha, Mingming
    Guo, Xiaobo
    Ma, Qiongxu
    Cheng, Hongwei
    Lin, Wenfang
    Cheng, Linxun
    Han, Bing
    PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023, 2023, : 151 - 160
  • [43] Exploiting Task-Feature Co-Clusters in Multi-Task Learning
    Xu, Linli
    Huang, Aiqing
    Chen, Jianhui
    Chen, Enhong
    PROCEEDINGS OF THE TWENTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2015, : 1931 - 1937
  • [44] NONPARAMETRIC BAYESIAN FEATURE SELECTION FOR MULTI-TASK LEARNING
    Li, Hui
    Liao, Xuejun
    Carin, Lawrence
    2011 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2011, : 2236 - 2239
  • [45] Unsupervised Multi-Task Feature Learning on Point Clouds
    Hassani, Kaveh
    Haley, Mike
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 8159 - 8170
  • [46] Probabilistic Joint Feature Selection for Multi-task Learning
    Xiong, Tao
    Bi, Jinbo
    Rao, Bharat
    Cherkassky, Vladimir
    PROCEEDINGS OF THE SEVENTH SIAM INTERNATIONAL CONFERENCE ON DATA MINING, 2007, : 332 - +
  • [47] Multi-Task Adversarial Network for Disentangled Feature Learning
    Liu, Yang
    Wang, Zhaowen
    Jin, Hailin
    Wassell, Ian
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3743 - 3751
  • [48] Adversarial task-specific learning
    Fu, Xin
    Zhao, Yao
    Liu, Ting
    Wei, Yunchao
    Li, Jianan
    Wei, Shikui
    NEUROCOMPUTING, 2019, 362 : 118 - 128
  • [49] Efficient Multi-Task Auxiliary Learning: Selecting Auxiliary Data by Feature Similarity
    Kung, Po-Nien
    Chen, Yi-Cheng
    Yin, Sheng-Siang
    Yang, Tse-Hsuan
    Chen, Yun-Nung
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 416 - 428
  • [50] Feature Extraction of Laser Machining Data by Using Deep Multi-Task Learning
    Zhang, Quexuan
    Wang, Zexuan
    Wang, Bin
    Ohsawa, Yukio
    Hayashi, Teruaki
    INFORMATION, 2020, 11 (08)