On Sure Early Selection of the Best Subset

Cited: 0
Authors:
Zhu, Ziwei [1]
Wu, Shihao [2]
Affiliations:
[1] Univ Michigan, Dept Stat, Ann Arbor, MI 60654 USA
[2] Univ Michigan, Dept Stat, Ann Arbor, MI 48105 USA
Keywords:
Sure early selection; best subset selection; false discovery rate; solution path; sure screening; NONCONCAVE PENALIZED LIKELIHOOD; FALSE DISCOVERY RATE; VARIABLE SELECTION; LASSO; RECOVERY; DIMENSION
DOI:
10.1109/TIT.2024.3415653
Chinese Library Classification: TP [automation technology; computer technology]
Discipline code: 0812
Abstract:
The early solution path, which tracks the first few variables that enter the model of a selection procedure, is of profound importance to scientific discovery. In practice, it is often statistically hopeless to identify all the important features with no false discovery, let alone the intimidating expense of experiments to test their significance. This realistic limitation calls for statistical guarantees on the early discoveries of a model selector. In this paper, we focus on the early solution path of best subset selection (BSS), where the sparsity constraint is set below the true sparsity. Under a sparse high-dimensional linear model, we establish the sufficient and (nearly) necessary condition for BSS to achieve sure early selection, or equivalently, zero false discovery throughout its early path. Essentially, this condition boils down to a lower bound on the minimum projected signal margin, which characterizes the gap in captured signal strength between sure selection models and those with spurious discoveries. Defined through projection operators, this margin is insensitive to the restricted eigenvalues of the design, suggesting the robustness of BSS against collinearity. Moreover, our model selection guarantee tolerates reasonable optimization error and thus applies to near-best subsets. Finally, to overcome the computational hurdle of BSS in high dimensions, we propose the "screen then select" (STS) strategy, which reduces the dimension before running BSS. Our numerical experiments show that the resulting early path exhibits a much lower false discovery rate (FDR) than LASSO, MCP, and SCAD, especially in the presence of highly correlated designs. We also investigate the early paths of iterative hard thresholding algorithms, greedy computational surrogates for BSS, which yield FDR comparable to that of our STS procedure.
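The "screen then select" idea described in the abstract can be sketched as follows. This is a minimal, illustrative implementation, not the authors' exact procedure: the screening rule here is marginal correlation (as in sure independence screening), the selection step is exhaustive BSS at a sparsity level k below the presumed true sparsity, and the function name and parameters are assumptions for the sketch.

```python
# Hypothetical sketch of a "screen then select" (STS) strategy:
# screen down to a small candidate set, then run best subset
# selection (BSS) on the survivors at early sparsity level k.
import itertools
import numpy as np

def sts_early_path(X, y, screen_size, k):
    """Return the best size-k subset (indices) among screened features."""
    # Screening step: keep the screen_size features with the largest
    # absolute marginal correlation with the response.
    scores = np.abs(X.T @ y)
    screened = np.argsort(scores)[::-1][:screen_size]
    # Selection step: exhaustive BSS over the screened set, feasible
    # because screen_size is small.
    best_rss, best_subset = np.inf, None
    for subset in itertools.combinations(screened, k):
        Xs = X[:, list(subset)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        if rss < best_rss:
            best_rss, best_subset = rss, subset
    return sorted(int(j) for j in best_subset)
```

Because k is kept below the true sparsity, the returned subset corresponds to an early point on the BSS solution path, where the paper's sure-early-selection guarantee is the relevant notion of correctness.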
Pages: 8870 - 8891 (22 pages)