Transfer Learning for Survival Analysis via Efficient L2,1-norm Regularized Cox Regression

Cited by: 0
Authors
Li, Yan [1]
Wang, Lu [2]
Wang, Jie [1]
Ye, Jieping [1,3]
Reddy, Chandan K. [4]
Affiliations
[1] Univ Michigan, Dept Computat Med & Bioinformat, Ann Arbor, MI 48109 USA
[2] Wayne State Univ, Dept Comp Sci, Detroit, MI 48202 USA
[3] Univ Michigan, Dept Elect Engn & Comp Sci, Ann Arbor, MI 48109 USA
[4] Virginia Tech, Dept Comp Sci, Arlington, VA 22203 USA
Funding
US National Science Foundation;
Keywords
Transfer learning; survival analysis; regularization; regression; high-dimensional data; CANCER CELL-LINES; ALGORITHM; PATHS;
DOI
10.1109/ICDM.2016.129
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In survival analysis, the primary goal is to monitor several entities and model the occurrence of a particular event of interest. In such applications, the event of interest is often not observed for every entity during the study period, which gives rise to the problem of censoring and cannot be easily handled by standard regression approaches. In addition, obtaining sufficient labeled training instances for learning a robust prediction model is a very time-consuming process and can be extremely difficult in practice. In this paper, we propose a transfer learning based Cox method, called Transfer-Cox, which uses auxiliary data to augment learning when there is an insufficient amount of training examples. The proposed method aims to extract "useful" knowledge from the source domain and transfer it to the target domain, thus potentially improving the prediction performance on such time-to-event data. The proposed method uses the l2,1-norm penalty to encourage multiple predictors to share similar sparsity patterns, thereby learning a shared representation across the source and target domains and potentially improving the model performance on the target task. To speed up the computation, we apply a screening approach and extend the strong rule to sparse survival analysis models over multiple high-dimensional censored datasets. We demonstrate the performance of the proposed transfer learning method using several synthetic and high-dimensional microarray gene expression benchmark datasets and compare it with other related competing state-of-the-art methods. Our results show that the proposed screening approach significantly improves the computational efficiency of the algorithm without compromising the prediction performance. We also demonstrate the scalability of the proposed approach and show that the time taken to obtain results is linear with respect to both the number of instances and the number of features.
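To make the formulation concrete, here is a minimal sketch, assuming a multi-task setup in which the source and the target domain each contribute one column of a shared coefficient matrix W: the negative Cox partial log-likelihood (Breslow approximation) is minimized per domain, and a row-wise l2,1-norm penalty couples the domains via proximal gradient descent. The function names (fit_transfer_cox, prox_l21), the fixed step size, and the plain proximal-gradient loop are illustrative assumptions, not the paper's Transfer-Cox implementation, which additionally screens features with an extended strong rule before solving.

import numpy as np

def neg_partial_loglik_grad(X, time, event, beta):
    # Gradient of the (Breslow) negative Cox partial log-likelihood, averaged over subjects.
    order = np.argsort(-time)            # sort by descending time so risk sets are prefixes
    X, event = X[order], event[order]
    eta = X @ beta
    w = np.exp(eta - eta.max())          # numerically stabilized risk scores
    cum_w = np.cumsum(w)                 # running sum of w over each subject's risk set
    cum_wx = np.cumsum(w[:, None] * X, axis=0)
    # each observed event contributes x_i minus the weighted average of x over its risk set
    grad = -(event[:, None] * (X - cum_wx / cum_w[:, None])).sum(axis=0)
    return grad / len(time)

def prox_l21(W, thresh):
    # Proximal operator of thresh * ||W||_{2,1}: row-wise group soft-thresholding.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12)) * W

def fit_transfer_cox(domains, lam=0.1, step=0.5, n_iter=500):
    # domains: list of (X, time, event) triples, e.g. [source, target]; returns a p x T coefficient matrix.
    p = domains[0][0].shape[1]
    W = np.zeros((p, len(domains)))
    for _ in range(n_iter):
        G = np.column_stack([neg_partial_loglik_grad(X, t, e, W[:, k])
                             for k, (X, t, e) in enumerate(domains)])
        W = prox_l21(W - step * G, step * lam)   # gradient step followed by the l2,1 prox
    return W

Rows of the returned matrix that are driven exactly to zero correspond to features discarded jointly across the source and target domains, which is the shared sparsity pattern the abstract refers to; the strong-rule-based screening the paper uses to eliminate such rows before optimization is omitted from this sketch.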
Pages: 231-240
Number of pages: 10
Related Papers
50 records in total
  • [41] The l2,1-Norm Stacked Robust Autoencoders for Domain Adaptation
    Jiang, Wenhao
    Gao, Hongchang
    Chung, Fu-lai
    Huang, Heng
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1723 - 1729
  • [42] Revisiting L2,1-Norm Robustness With Vector Outlier Regularization
    Jiang, Bo
    Ding, Chris
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (12) : 5624 - 5629
  • [43] Unsupervised Discriminative Feature Selection in a Kernel Space via L2,1-Norm Minimization
    Liu, Yang
    Wang, Yizhou
    2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012), 2012, : 1205 - 1208
  • [44] l2,1-norm minimization based negative label relaxation linear regression for feature selection
    Peng, Yali
    Sehdev, Paramjit
    Liu, Shigang
Li, Jun
    Wang, Xili
    PATTERN RECOGNITION LETTERS, 2018, 116 : 170 - 178
  • [45] Robust discriminant feature selection via joint L2,1-norm distance minimization and maximization
    Yang, Zhangjing
    Ye, Qiaolin
    Chen, Qiao
    Ma, Xu
    Fu, Liyong
    Yang, Guowei
    Yan, He
    Liu, Fan
    KNOWLEDGE-BASED SYSTEMS, 2020, 207
  • [46] Learning Robust Distance Metric with Side Information via Ratio Minimization of Orthogonally Constrained l2,1-Norm Distances
    Liu, Kai
    Brand, Lodewijk
    Wang, Hua
    Nie, Feiping
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3008 - 3014
  • [47] Discriminative Feature Selection via Joint Trace Ratio Criterion and l2,1-norm Regularization
    Jiang, Zhang
    Zhao, Mingbo
    Kong, Weijian
    2018 IEEE SYMPOSIUM ON PRODUCT COMPLIANCE ENGINEERING - ASIA 2018 (IEEE ISPCE-CN 2018), 2018, : 27 - 32
  • [48] A novel online sequential extreme learning machine with L2,1-norm regularization for prediction problems
    Preeti
    Bala, Rajni
    Dagar, Ankita
    Singh, Ram Pal
    APPLIED INTELLIGENCE, 2021, 51 (03) : 1669 - 1689
  • [49] Divergence-Based Locally Weighted Ensemble Clustering with Dictionary Learning and L2,1-Norm
    Xu, Jiaxuan
    Wu, Jiang
    Li, Taiyong
    Nan, Yang
    ENTROPY, 2022, 24 (10)