Multi-task feature selection with sparse regularization to extract common and task-specific features

Cited by: 16
Authors
Zhang, Jiashuai [1 ]
Miao, Jianyu [2 ]
Zhao, Kun [3 ]
Tian, Yingjie [4 ,5 ,6 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Math Sci, Beijing 100049, Peoples R China
[2] Henan Univ Technol, Coll Informat Sci & Engn, Zhengzhou 450001, Henan, Peoples R China
[3] Beijing Wuzi Univ, Sch Logist, Beijing 101149, Peoples R China
[4] Chinese Acad Sci, Res Ctr Fictitious Econ & Data Sci, Beijing 100190, Peoples R China
[5] Univ Chinese Acad Sci, Sch Econ & Management, Beijing 100190, Peoples R China
[6] Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-task feature learning; Sparse regularization; Non-convex; ADMM; VARIABLE SELECTION; REGRESSION; CLASSIFICATION;
DOI
10.1016/j.neucom.2019.02.035
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Multi-task learning (MTL) exploits the relationships among tasks to improve the generalization performance of all tasks by learning them simultaneously. Multi-task sparse feature learning, formulated under the regularization framework, is one of the main approaches to MTL, so the regularization term is crucial for multi-task sparse feature models. While most existing models use convex sparse regularization, the non-convex capped-l(1) regularization has been extended to MTL and shown to be a powerful sparsity-inducing term. In this paper, we propose a novel regularization term for multi-task learning by extending the non-convex l(1-2) regularization to the multi-task setting. The regularization term not only induces group sparsity to extract the common features shared by all tasks, but also learns task-specific features through the relaxation provided by its second term. Although the model formulation resembles one previously proposed for multi-class problems, ours is the first to extend l(1-2) regularization to multi-task learning so that both common and task-specific features can be extracted. A classical multi-task learning model, Multi-task Feature Selection (MTFS), can be viewed as a special case of our proposed model. Because the regularization term is difficult to handle directly, we approximate the original problem with a locally linear subproblem and then use the Alternating Direction Method of Multipliers (ADMM) to solve it. Theoretical analysis establishes the convergence of the proposed algorithm, and its time complexity is provided. Experimental results demonstrate the effectiveness of the proposed method. (C) 2019 Elsevier B.V. All rights reserved.
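For a single weight vector, the l(1-2) penalty is ||w||_1 - ||w||_2. A natural multi-task extension, sketched below under the assumption (not necessarily the paper's exact formulation) that the l1 part is replaced by the group-sparse l(2,1) norm of the feature-by-task weight matrix W, is R(W) = ||W||_{2,1} - ||W||_F: the first term zeros out entire feature rows (common features across tasks), while subtracting the second term relaxes the penalty so individual tasks can keep task-specific weights. The function names `l12_multitask_penalty` and `mtl_objective` are illustrative:

```python
import math

def l12_multitask_penalty(W):
    """Hypothetical multi-task l(1-2) penalty: ||W||_{2,1} - ||W||_F.

    W is a d x T matrix given as a list of rows; row j holds feature j's
    weights across the T tasks.
    """
    # l2 norm of each feature's row: sparsity acts on whole features
    row_norms = [math.sqrt(sum(w * w for w in row)) for row in W]
    # Frobenius norm of W equals the l2 norm of the row-norm vector
    frob = math.sqrt(sum(n * n for n in row_norms))
    return sum(row_norms) - frob

def mtl_objective(tasks, W, lam):
    """Sum of per-task squared losses plus the sparse penalty.

    tasks: list of (X, y) pairs, one per task; column t of W is task t's model.
    """
    loss = 0.0
    for t, (X, y) in enumerate(tasks):
        for xi, yi in zip(X, y):
            pred = sum(x * row[t] for x, row in zip(xi, W))
            loss += (pred - yi) ** 2
    return 0.5 * loss + lam * l12_multitask_penalty(W)
```

Dropping the second (Frobenius) term leaves the plain l(2,1) penalty, which is consistent with the abstract's remark that MTFS arises as a special case of the proposed model.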
Pages: 76-89
Page count: 14