Optimal estimation of slope vector in high-dimensional linear transformation models

Cited: 1
Author
Tan, Xin Lu [1 ]
Affiliation
[1] Univ Penn, Wharton Sch, Dept Stat, Philadelphia, PA 19104 USA
Keywords
Canonical correlation analysis; Elastic net penalty; Elliptical distribution; Kendall's tau; Optimal rate of convergence; Variables transformation; SLICED INVERSE REGRESSION; PRINCIPAL HESSIAN DIRECTIONS; SEMIPARAMETRIC ESTIMATION; VARIABLE SELECTION; RANK CORRELATION; GENERALIZED REGRESSION; MULTIPLE-REGRESSION; PARTIAL LIKELIHOOD; REDUCTION; LASSO;
DOI
10.1016/j.jmva.2018.09.001
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
In a linear transformation model, there exists an unknown monotone, typically nonlinear, transformation function such that the transformed response variable is related to the predictor variables via a linear regression model. This paper presents CENet, a new method for estimating the slope vector and simultaneously performing variable selection in the high-dimensional sparse linear transformation model. CENet is the solution to a convex optimization problem that can be computed efficiently by an algorithm with guaranteed convergence to the global optimum. It is shown that when the joint distribution of the predictors and errors is elliptical, under some regularity conditions, CENet attains the same optimal rate of convergence as the best regression method in the high-dimensional sparse linear regression model. The empirical performance of CENet is demonstrated on both simulated and real datasets. The connection of CENet with existing nonlinear regression/multivariate methods is also discussed. (C) 2018 Elsevier Inc. All rights reserved.
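The abstract describes CENet as the solution to a convex, penalized optimization problem built from rank-based quantities (the keywords mention Kendall's tau and an elastic net penalty) that remain valid under elliptical distributions. The sketch below is not the paper's estimator; it is a minimal illustration, assuming a plug-in objective of the form 0.5 b'Sxx b - sxy'b + lam1*||b||_1 + 0.5*lam2*||b||_2^2, where Sxx and sxy come from the sine-transformed Kendall's tau correlation matrix of (X, y). The function names tau_correlation and elastic_net_slope, and all tuning values, are hypothetical.

```python
import numpy as np
from scipy.stats import kendalltau

def tau_correlation(Z):
    """Rank-based correlation estimate: sin(pi/2 * Kendall's tau) entrywise,
    which is consistent for the Pearson correlation under elliptical laws."""
    p = Z.shape[1]
    R = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau, _ = kendalltau(Z[:, j], Z[:, k])
            R[j, k] = R[k, j] = np.sin(0.5 * np.pi * tau)
    return R

def elastic_net_slope(X, y, lam1=0.05, lam2=0.05, n_iter=1000):
    """Illustrative rank-based elastic-net estimator (not the paper's exact
    CENet objective): minimize
        0.5 * b' Sxx b - sxy' b + lam1 * ||b||_1 + 0.5 * lam2 * ||b||_2^2
    by proximal gradient descent, with Sxx and sxy taken from the
    sine-transformed Kendall's tau correlation matrix of (X, y)."""
    R = tau_correlation(np.column_stack([X, y]))
    Sxx, sxy = R[:-1, :-1], R[:-1, -1]
    # Step size: reciprocal of a Lipschitz constant for the smooth part's gradient.
    step = 1.0 / (np.linalg.eigvalsh(Sxx)[-1] + lam2)
    b = np.zeros(Sxx.shape[0])
    for _ in range(n_iter):
        grad = Sxx @ b - sxy + lam2 * b
        b = b - step * grad
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam1, 0.0)  # soft-threshold
    return b

if __name__ == "__main__":
    # Toy check: y is a monotone (exponential) transform of a sparse linear model,
    # so the slope direction should still be recoverable from ranks.
    rng = np.random.default_rng(0)
    n, p = 300, 10
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:3] = [1.5, -1.0, 0.5]
    y = np.exp(X @ beta + 0.5 * rng.standard_normal(n))
    print(np.round(elastic_net_slope(X, y), 2))
```

Because the unknown monotone transformation absorbs scale, rank-based estimates of this kind identify the slope vector only up to a positive multiple, so the recovered coefficients are best read as a sparse direction rather than as the original beta.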
Pages: 179-204
Number of pages: 26
Related Papers (showing items [21]-[30] of 50)
  • [21] Semiparametric efficient estimation in high-dimensional partial linear regression models
    Fu, Xinyu
    Huang, Mian
    Yao, Weixin
    SCANDINAVIAN JOURNAL OF STATISTICS, 2024, 51 (03) : 1259 - 1287
  • [22] Adaptive group bridge estimation for high-dimensional partially linear models
    Wang, Xiuli
    Wang, Mingqiu
    JOURNAL OF INEQUALITIES AND APPLICATIONS, 2017
  • [23] Boosting for high-dimensional linear models
    Buhlmann, Peter
    ANNALS OF STATISTICS, 2006, 34 (02) : 559 - 583
  • [24] Optimal Poisson subsampling decorrelated score for high-dimensional generalized linear models
    Shan, Junhao
    Wang, Lei
    JOURNAL OF APPLIED STATISTICS, 2024, 51 (14) : 2719 - 2743
  • [25] Optimal equivariant prediction for high-dimensional linear models with arbitrary predictor covariance
    Dicker, Lee H.
    ELECTRONIC JOURNAL OF STATISTICS, 2013, 7 : 1806 - 1834
  • [26] Linear shrinkage estimation of high-dimensional means
    Ikeda, Yuki
    Nakada, Ryumei
    Kubokawa, Tatsuya
    Srivastava, Muni S.
    COMMUNICATIONS IN STATISTICS-THEORY AND METHODS, 2023, 52 (13) : 4444 - 4460
  • [27] Variable selection in multivariate linear models with high-dimensional covariance matrix estimation
    Perrot-Dockes, Marie
    Levy-Leduc, Celine
    Sansonnet, Laure
    Chiquet, Julien
    JOURNAL OF MULTIVARIATE ANALYSIS, 2018, 166 : 78 - 97
  • [28] Efficient adaptive estimation strategies in high-dimensional partially linear regression models
    Gao, Xiaoli
    Ahmed, S. Ejaz
    PERSPECTIVES ON BIG DATA ANALYSIS: METHODOLOGIES AND APPLICATIONS, 2014, 622 : 61 - 80
  • [29] Noise covariance estimation in multi-task high-dimensional linear models
    Tan, Kai
    Romon, Gabriel
    Bellec, Pierre C.
    BERNOULLI, 2024, 30 (03) : 1695 - 1722
  • [30] Optimal shrinkage estimator for high-dimensional mean vector
    Bodnar, Taras
    Okhrin, Ostap
    Parolya, Nestor
    JOURNAL OF MULTIVARIATE ANALYSIS, 2019, 170 : 63 - 79