Sparse Convoluted Rank Regression in High Dimensions

Cited: 8
Authors
Zhou, Le [1 ]
Wang, Boxiang [2 ]
Zou, Hui [3 ]
Affiliations
[1] Hong Kong Baptist Univ, Dept Math, Kowloon Tong, Hong Kong, Peoples R China
[2] Univ Iowa, Dept Stat & Actuarial Sci, Iowa City, IA USA
[3] Univ Minnesota, Sch Stat, Minneapolis, MN 55455 USA
Keywords
Convolution; Efficiency; High dimensions; Information criterion; Rank regression; Nonconcave penalized likelihood; Quantile regression; Variable selection
DOI
10.1080/01621459.2023.2202433
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
Wang et al. studied high-dimensional sparse penalized rank regression and established its appealing theoretical properties. Compared with least squares, rank regression can yield a substantial gain in estimation efficiency while maintaining a minimal relative efficiency of 86.4%. However, computing penalized rank regression can be very challenging for high-dimensional data because the rank regression loss is highly nonsmooth. In this work we view the rank regression loss as a nonsmooth empirical counterpart of a population-level quantity, and derive a smooth empirical counterpart by substituting a kernel density estimator for the true distribution in the expectation calculation. This view leads to the convoluted rank regression loss and consequently to sparse penalized convoluted rank regression (CRR) for high-dimensional data. We prove several interesting asymptotic properties of CRR. Under the same key assumptions used for sparse rank regression, we establish the rate of convergence of the ℓ1-penalized CRR with a tuning-free penalization parameter and prove the strong oracle property of the folded concave penalized CRR. We further propose a high-dimensional Bayesian information criterion for selecting the penalization parameter in folded concave penalized CRR and prove its selection consistency. We derive an efficient algorithm for solving sparse convoluted rank regression that scales well with high dimensions. Numerical examples demonstrate the promising performance of sparse convoluted rank regression over sparse rank regression. Our theoretical and numerical results suggest that sparse convoluted rank regression enjoys the best of both sparse least squares regression and sparse rank regression. Supplementary materials for this article are available online.
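
The smoothing step described in the abstract can be made concrete with a minimal sketch. The code below is an illustration, not the authors' implementation: the Gaussian kernel choice, the bandwidth h = 0.5, the scipy dependency, and the function names rank_loss and convoluted_rank_loss are all assumptions for demonstration. It contrasts the nonsmooth pairwise rank loss with the smooth surrogate obtained by convolving |·| with a N(0, h^2) density, which has the closed form E|u - hZ| = u(2Φ(u/h) - 1) + 2hφ(u/h) for Z ~ N(0, 1).

import numpy as np
from scipy.stats import norm

def rank_loss(e):
    # Nonsmooth pairwise (Wilcoxon-type) rank loss: mean of |e_i - e_j| over i != j.
    n = len(e)
    d = e[:, None] - e[None, :]
    return np.abs(d).sum() / (n * (n - 1))

def convoluted_rank_loss(e, h=0.5):
    # Smooth surrogate (illustrative CRR-style loss): |u| convolved with
    # the N(0, h^2) density, evaluated in closed form for a Gaussian kernel.
    n = len(e)
    d = e[:, None] - e[None, :]
    s = d * (2.0 * norm.cdf(d / h) - 1.0) + 2.0 * h * norm.pdf(d / h)
    np.fill_diagonal(s, 0.0)  # the pairwise sum excludes i == j
    return s.sum() / (n * (n - 1))

rng = np.random.default_rng(0)
e = rng.standard_normal(200)  # stand-in residuals y_i - x_i' beta
print(rank_loss(e), convoluted_rank_loss(e))  # the two agree up to O(h)

Because the convoluted loss is everywhere differentiable, gradient-based solvers can be paired with ℓ1 or folded concave penalties, and as h shrinks to 0 the smoothed loss recovers the original rank regression loss.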
Pages: 1500-1512
Number of pages: 13