Errors-in-variables models with dependent measurements

Cited by: 13
Authors
Rudelson, Mark [1 ]
Zhou, Shuheng [1 ]
Affiliations
[1] Univ Michigan, Dept Math, Dept Stat, Ann Arbor, MI 48109 USA
Source
ELECTRONIC JOURNAL OF STATISTICS | 2017, Vol. 11, Issue 01
Funding
National Science Foundation (USA);
Keywords
Errors-in-variable models; measurement error data; subgaussian concentration; matrix variate distributions; nonconvexity; MISSING DATA; SIMULATION-EXTRAPOLATION; DANTZIG SELECTOR; REGRESSION; COVARIANCE; ESTIMATORS; ALGORITHM; NONCONVEXITY; UNCERTAINTY; LIKELIHOOD;
DOI
10.1214/17-EJS1234
Chinese Library Classification (CLC)
O21 [Probability theory and mathematical statistics]; C8 [Statistics];
Discipline codes
020208 ; 070103 ; 0714 ;
Abstract
Suppose that we observe y ∈ R^n and X ∈ R^{n×m} in the following errors-in-variables model: y = X_0 β* + ε, X = X_0 + W, where X_0 is an n × m design matrix with independent subgaussian row vectors, ε ∈ R^n is a noise vector, and W is a mean-zero n × m random noise matrix with independent subgaussian column vectors, independent of X_0 and ε. This model differs significantly from those analyzed in the literature in that we allow the measurement error for each covariate to be a dependent vector across its n observations. Such error structures appear in the science literature when modeling the trial-to-trial fluctuations in response strength shared across a set of neurons. Under sparsity and restricted eigenvalue-type conditions, we show that one can recover a sparse vector β* ∈ R^m from the model given a single observation matrix X and the response vector y. We establish consistency in estimating β* and obtain rates of convergence in the ℓ_q norm, for q = 1, 2 for the Lasso-type estimator, and for q ∈ [1, 2] for a Dantzig-type conic programming estimator. We show error bounds that approach those of the regular Lasso and the Dantzig selector as the errors in W tend to 0. We analyze the convergence rates of gradient descent methods for solving the nonconvex programs and show that the composite gradient descent algorithm is guaranteed to converge at a geometric rate to a neighborhood of the global minimizers; the size of the neighborhood is bounded by the statistical error in the ℓ_2 norm. Our analysis reveals interesting connections between computational and statistical efficiency and the concentration of measure phenomenon in random matrix theory. We provide simulation evidence illuminating the theoretical predictions.
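The model and the composite gradient descent step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the AR(1) error structure, the plugged-in error variance τ², the dimensions, and all tuning constants are assumptions of this sketch, and the side constraint the paper uses to control nonconvexity in high dimensions is omitted (here n > m, so the corrected Gram matrix is typically positive definite).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and a sparse target (values chosen for the sketch)
n, m, s = 200, 50, 5
beta_star = np.zeros(m)
beta_star[:s] = 1.0

# X_0: independent subgaussian rows (iid standard Gaussian here)
X0 = rng.standard_normal((n, m))

# W: independent columns, but each column is an AR(1) process across the
# n observations, so each covariate's measurement error is a dependent
# vector over observations (stationary variance tau^2 per entry)
rho, tau = 0.5, 0.3
W = np.empty((n, m))
W[0] = tau * rng.standard_normal(m)
for t in range(1, n):
    W[t] = rho * W[t - 1] + tau * np.sqrt(1 - rho**2) * rng.standard_normal(m)

eps = 0.5 * rng.standard_normal(n)
y = X0 @ beta_star + eps
X = X0 + W                         # only X and y are observed

# Corrected surrogates for the (possibly nonconvex) Lasso-type program:
# Gamma_hat = X'X/n - Sigma_W, gamma_hat = X'y/n, with the true error
# variance tau^2 plugged in (an assumption of this sketch)
Gamma_hat = X.T @ X / n - tau**2 * np.eye(m)
gamma_hat = X.T @ y / n

# Composite gradient descent: gradient step on the smooth quadratic part,
# then soft-thresholding for the l1 penalty
lam = 0.1
eta = 1.0 / np.linalg.norm(Gamma_hat, 2)   # step size from spectral norm
beta = np.zeros(m)
for _ in range(500):
    z = beta - eta * (Gamma_hat @ beta - gamma_hat)
    beta = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)

err = np.linalg.norm(beta - beta_star)     # l2 statistical error of the sketch
```

The naive Lasso applied to (X, y) would be biased because E[X'X/n] overstates the covariance of X_0 by the error covariance; subtracting the τ²-term corrects the quadratic part, at the cost of possible indefiniteness when m > n, which is where the paper's nonconvex analysis and geometric-rate guarantee come in.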
Pages: 1699-1797 (99 pages)
Related Papers (50 total)
  • [1] LINEAR ERRORS-IN-VARIABLES MODELS
    DEISTLER, M
    LECTURE NOTES IN CONTROL AND INFORMATION SCIENCES, 1986, 86 : 37 - 68
  • [2] Unidentifiability of errors-in-variables models with rank deficiency from measurements
    Xu, Peiliang
    Shi, Yun
    MEASUREMENT, 2022, 192
  • [3] Block bootstrap for dependent errors-in-variables
    Pesta, Michal
    COMMUNICATIONS IN STATISTICS-THEORY AND METHODS, 2017, 46 (04) : 1871 - 1897
  • [4] ASYMPTOTICS FOR WEAKLY DEPENDENT ERRORS-IN-VARIABLES
    Pesta, Michal
    KYBERNETIKA, 2013, 49 (05) : 692 - 704
  • [5] EDGEWORTH EXPANSIONS FOR ERRORS-IN-VARIABLES MODELS
    BABU, GJ
    BAI, ZD
    JOURNAL OF MULTIVARIATE ANALYSIS, 1992, 42 (02) : 226 - 244
  • [6] SPECIFICATION TESTING FOR ERRORS-IN-VARIABLES MODELS
    Otsu, Taisuke
    Taylor, Luke
    ECONOMETRIC THEORY, 2021, 37 (04) : 747 - 768
  • [7] Errors-in-variables beta regression models
    Carrasco, Jalmar M. F.
    Ferrari, Silvia L. P.
    Arellano-Valle, Reinaldo B.
    JOURNAL OF APPLIED STATISTICS, 2014, 41 (07) : 1530 - 1547
  • [8] Identification of nonlinear errors-in-variables models
    Vajk, I
    Hetthéssy, J
    AUTOMATICA, 2003, 39 (12) : 2099 - 2107
  • [9] Prediction in polynomial errors-in-variables models
    Kukush, Alexander
    Senko, Ivan
    MODERN STOCHASTICS-THEORY AND APPLICATIONS, 2020, 7 (02): : 203 - 219
  • [10] Identification of dynamic errors-in-variables models
    Castaldi, P
    Soverini, U
    AUTOMATICA, 1996, 32 (04) : 631 - 636