Variable selection using L0 penalty

Cited by: 0
Author
Zhang, Tonglin [1]
Affiliation
[1] Purdue Univ, Dept Stat, 150 North Univ St, W Lafayette, IN 47907 USA
Keywords
Consistency; Generalized information criterion; Generalized linear models; High-dimensional data; Model size; Penalized maximum likelihood; CENTRAL LIMIT-THEOREMS; TUNING PARAMETER SELECTION; REGRESSION; REGULARIZATION; SUBSET; MODELS; LASSO;
DOI
10.1016/j.csda.2023.107860
Chinese Library Classification (CLC)
TP39 [Computer applications];
Subject classification
081203 ; 0835 ;
Abstract
The determination of the tuning parameter by the generalized information criterion (GIC) is an important issue in variable selection. This article shows that the GIC and the L0 penalized objective functions are equivalent, leading to a new L0 penalized maximum likelihood method for high-dimensional generalized linear models. Drawing on techniques for the corresponding well-known discrete optimization problem in theoretical computer science, a two-step algorithm for local solutions is proposed. The first step optimizes the L0 penalized objective function under a given model size, where only a maximum likelihood algorithm is needed. The second step optimizes the L0 penalized objective function over a candidate set of model sizes, where only the GIC is needed. Because the tuning parameter can be fixed, its selection can be ignored in the proposed method. The theoretical study shows that the algorithm runs in polynomial time and that any resulting local solution is consistent, so the global solution is not needed in practice. Numerical studies show that the proposed method generally outperforms its competitors.
Pages: 18