A Unifying Framework of High-Dimensional Sparse Estimation with Difference-of-Convex (DC) Regularizations

Cited: 0
Authors
Cao, Shanshan [1 ]
Huo, Xiaoming [1 ]
Pang, Jong-Shi [2 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Univ Southern Calif, Los Angeles, CA 90089 USA
Keywords
(Generalized) linear regression; high-dimensional sparse estimation; nonconvex regularization; difference-of-convex (DC) functions; DC algorithms; asymptotic optimality; model selection consistency; NONCONCAVE PENALIZED LIKELIHOOD; CONFIDENCE-INTERVALS; VARIABLE SELECTION; MODEL SELECTION; ADAPTIVE LASSO; REGRESSION; OPTIMIZATION; CONVERGENCE; RELAXATION; OPTIMALITY
DOI
10.1214/21-STS832
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Under the linear regression framework, we study the variable selection problem when the underlying model is assumed to have a small number of nonzero coefficients. Nonconvex penalties of specific forms are well studied in the literature for sparse estimation. Recent work has pointed out that nearly all existing nonconvex penalties can be represented as difference-of-convex (DC) functions, that is, differences of two convex functions, even though the penalty itself may not be convex. There is a large literature on optimization problems whose objectives and/or constraints involve DC functions, and efficient numerical methods have been proposed for them. Under the DC framework, directional-stationary (d-stationary) solutions are considered, and they are usually not unique. In this paper, we show that under some mild conditions, a certain subset of the d-stationary solutions of an optimization problem with a DC objective enjoys ideal statistical properties: asymptotic estimation consistency, asymptotic model selection consistency, and asymptotic efficiency. Our assumptions are either weaker than or comparable with the conditions adopted in other existing works. This shows that the DC formulation offers a unified framework for existing approaches based on nonconvex penalties, bridging the optimization and statistics communities.
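As a concrete illustration of the kind of decomposition the abstract refers to (a standard textbook decomposition, not notation taken from the paper itself), the minimax concave penalty (MCP) can be written as a difference of two convex functions g and h; here lambda is the regularization level and gamma the concavity parameter, both illustrative symbols. The sketch below assumes the amsmath package for the cases environment.

% A minimal sketch: the MCP penalty p_{\lambda,\gamma} expressed as a
% difference of two convex functions g and h (standard decomposition;
% \lambda, \gamma are the usual MCP tuning parameters, not the paper's notation).
\[
  p_{\lambda,\gamma}(t) \;=\; g(t) - h(t), \qquad g(t) = \lambda |t|,
\]
\[
  h(t) \;=\;
  \begin{cases}
    \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda,\\[6pt]
    \lambda |t| - \dfrac{\gamma\lambda^{2}}{2}, & |t| > \gamma\lambda.
  \end{cases}
\]

Since g and h are both convex, a penalized least-squares objective built from this penalty is a DC function, and a generic DC algorithm can iterate by linearizing h at the current estimate and solving the resulting convex, weighted-lasso-type subproblem; the d-stationary points discussed in the abstract are the natural candidate limits of such iterations.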
Pages: 411 - 424
Number of pages: 14