Optimal estimation of slope vector in high-dimensional linear transformation models

Cited: 1
Authors
Tan, Xin Lu [1 ]
Affiliations
[1] Univ Penn, Wharton Sch, Dept Stat, Philadelphia, PA 19104 USA
Keywords
Canonical correlation analysis; Elastic net penalty; Elliptical distribution; Kendall's tau; Optimal rate of convergence; Variables transformation; SLICED INVERSE REGRESSION; PRINCIPAL HESSIAN DIRECTIONS; SEMIPARAMETRIC ESTIMATION; VARIABLE SELECTION; RANK CORRELATION; GENERALIZED REGRESSION; MULTIPLE-REGRESSION; PARTIAL LIKELIHOOD; REDUCTION; LASSO;
DOI
10.1016/j.jmva.2018.09.001
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline codes
020208 ; 070103 ; 0714 ;
Abstract
In a linear transformation model, there exists an unknown monotone, typically nonlinear, transformation function such that the transformed response variable is related to the predictor variables via a linear regression model. This paper presents CENet, a new method for estimating the slope vector and simultaneously performing variable selection in the high-dimensional sparse linear transformation model. CENet is the solution to a convex optimization problem that can be computed efficiently by an algorithm with guaranteed convergence to the global optimum. It is shown that when the joint distribution of the predictors and errors is elliptical, under some regularity conditions, CENet attains the same optimal rate of convergence as the best regression method in the high-dimensional sparse linear regression model. The empirical performance of CENet is demonstrated on both simulated and real datasets. The connection of CENet with existing nonlinear regression/multivariate methods is also discussed. (C) 2018 Elsevier Inc. All rights reserved.
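The ingredients named in the keywords and abstract (Kendall's tau under an elliptical distribution, an elastic-net penalty, a convex program solvable to the global optimum) can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's actual algorithm: the function names, the sine transform sin(pi*tau/2) used to map Kendall's tau to a latent correlation for elliptical data, the penalty parameterization, and the plain coordinate-descent solver are all assumptions made here for exposition.

```python
import numpy as np
from scipy.stats import kendalltau

def rank_correlation_matrix(Z):
    """Estimate a latent correlation matrix for elliptical data:
    pairwise Kendall's tau mapped through sin(pi/2 * tau).
    (Illustrative assumption; rank correlations are invariant to
    the unknown monotone transformation of the response.)"""
    p = Z.shape[1]
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau, _ = kendalltau(Z[:, j], Z[:, k])
            R[j, k] = R[k, j] = np.sin(np.pi / 2 * tau)
    return R

def elastic_net_from_correlations(R_xx, r_xy, lam1, lam2, n_iter=200):
    """Coordinate descent for the convex elastic-net problem
        min_b  0.5 b'R_xx b - r_xy'b + lam1*||b||_1 + 0.5*lam2*||b||_2^2,
    stated purely in terms of correlation estimates, so the unknown
    response transformation never needs to be estimated."""
    p = len(r_xy)
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial correlation of coordinate j with current residual.
            rho = r_xy[j] - R_xx[j] @ b + R_xx[j, j] * b[j]
            # Soft-thresholding step of the elastic net.
            b[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (R_xx[j, j] + lam2)
    return b
```

A typical use would stack predictors and (possibly transformed) response into one matrix, estimate the rank-based correlation matrix, and feed its predictor block and predictor-response column into the penalized solver; the lasso-type soft-thresholding then performs the variable selection the abstract describes.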
Pages: 179-204 (26 pp.)
Related papers (50 items)
  • [31] Stable prediction in high-dimensional linear models
    Lin, Bingqing
    Wang, Qihua
    Zhang, Jun
    Pang, Zhen
    STATISTICS AND COMPUTING, 2017, 27 (05) : 1401 - 1412
  • [33] High-dimensional generalized linear models and the lasso
    van de Geer, Sara A.
    ANNALS OF STATISTICS, 2008, 36 (02): : 614 - 645
  • [34] Simultaneous Inference for High-Dimensional Linear Models
    Zhang, Xianyang
    Cheng, Guang
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2017, 112 (518) : 757 - 768
  • [35] Sparse transition matrix estimation for high-dimensional and locally stationary vector autoregressive models
    Ding, Xin
    Qiu, Ziyi
    Chen, Xiaohui
    ELECTRONIC JOURNAL OF STATISTICS, 2017, 11 (02): : 3871 - 3902
  • [36] A Bayesian framework for sparse estimation in high-dimensional mixed frequency vector autoregressive models
    Chakraborty, Nilanjana
    Khare, Kshitij
    Michailidis, George
    STATISTICA SINICA, 2023, 33 : 1629 - 1652
  • [38] Regularized Estimation of High-dimensional Factor-Augmented Vector Autoregressive (FAVAR) Models
    Lin, Jiahe
    Michailidis, George
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21
  • [39] Statistical significance in high-dimensional linear models
    Buehlmann, Peter
    BERNOULLI, 2013, 19 (04) : 1212 - 1242
  • [40] High-dimensional inference in misspecified linear models
    Buehlmann, Peter
    van de Geer, Sara
    ELECTRONIC JOURNAL OF STATISTICS, 2015, 9 (01): : 1449 - 1473