Optimal estimation of slope vector in high-dimensional linear transformation models

Cited by: 1
Authors
Tan, Xin Lu [1 ]
Affiliations
[1] Univ Penn, Wharton Sch, Dept Stat, Philadelphia, PA 19104 USA
Keywords
Canonical correlation analysis; Elastic net penalty; Elliptical distribution; Kendall's tau; Optimal rate of convergence; Variables transformation; SLICED INVERSE REGRESSION; PRINCIPAL HESSIAN DIRECTIONS; SEMIPARAMETRIC ESTIMATION; VARIABLE SELECTION; RANK CORRELATION; GENERALIZED REGRESSION; MULTIPLE-REGRESSION; PARTIAL LIKELIHOOD; REDUCTION; LASSO;
DOI
10.1016/j.jmva.2018.09.001
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Subject classification codes
020208; 070103; 0714;
Abstract
In a linear transformation model, there exists an unknown monotone, typically nonlinear, transformation function such that the transformed response variable is related to the predictor variables through a linear regression model. This paper presents CENet, a new method for estimating the slope vector and simultaneously performing variable selection in the high-dimensional sparse linear transformation model. CENet is the solution to a convex optimization problem that can be computed efficiently by an algorithm with guaranteed convergence to the global optimum. It is shown that when the joint distribution of the predictors and errors is elliptical, under some regularity conditions, CENet attains the same optimal rate of convergence as the best regression method in the high-dimensional sparse linear regression model. The empirical performance of CENet is demonstrated on both simulated and real datasets. The connection of CENet with existing nonlinear regression/multivariate methods is also discussed. (C) 2018 Elsevier Inc. All rights reserved.
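The abstract does not spell out the estimator, but the keywords (Kendall's tau, elastic net penalty, elliptical distribution, rank correlation) suggest the general pattern of rank-correlation-based penalized estimation. The sketch below is an illustrative assumption, not the paper's actual CENet algorithm: it estimates correlations via the sin(pi/2 * tau) transform of Kendall's tau (consistent under ellipticity) and then solves an elastic-net-penalized quadratic program by coordinate descent. All function names (`rank_correlations`, `elastic_net_direction`) and tuning choices here are hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def rank_correlations(X, y):
    """Kendall's tau correlations of (X, y), mapped to the Pearson scale via
    sin(pi/2 * tau), which is consistent for elliptical distributions."""
    n, p = X.shape
    Sigma = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            t, _ = kendalltau(X[:, j], X[:, k])
            Sigma[j, k] = Sigma[k, j] = np.sin(np.pi / 2 * t)
    rho = np.array([np.sin(np.pi / 2 * kendalltau(X[:, j], y)[0])
                    for j in range(p)])
    return Sigma, rho

def elastic_net_direction(Sigma, rho, lam, alpha=0.5, n_sweeps=500, tol=1e-8):
    """Coordinate descent for the elastic-net quadratic program
        min_b 0.5 * b'Sigma b - rho'b + lam*(alpha*||b||_1 + 0.5*(1-alpha)*||b||_2^2)."""
    p = len(rho)
    b = np.zeros(p)
    for _ in range(n_sweeps):
        b_old = b.copy()
        for j in range(p):
            # partial residual with coordinate j removed
            r = rho[j] - Sigma[j] @ b + Sigma[j, j] * b[j]
            b[j] = soft_threshold(r, lam * alpha) / (Sigma[j, j] + lam * (1 - alpha))
        if np.max(np.abs(b - b_old)) < tol:
            break
    return b

if __name__ == "__main__":
    # Transformation model: y = g(X beta + eps) with g = exp (unknown, monotone).
    rng = np.random.default_rng(0)
    n, p = 300, 10
    beta = np.zeros(p)
    beta[:3] = [1.0, -1.0, 0.5]
    X = rng.normal(size=(n, p))
    y = np.exp(X @ beta + 0.3 * rng.normal(size=n))
    Sigma, rho = rank_correlations(X, y)
    print("estimated direction:", np.round(elastic_net_direction(Sigma, rho, lam=0.1), 2))
```

Because the rank correlations are invariant to the monotone transform g, the estimated vector recovers the slope direction (up to scale) without ever estimating g itself.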
Pages: 179-204
Number of pages: 26