Penalized regression via the restricted bridge estimator

Cited by: 8
Authors
Yuzbasi, Bahadir [1 ]
Arashi, Mohammad [2 ]
Akdeniz, Fikri [3 ]
Affiliations
[1] Inonu Univ, Dept Econometr, Malatya, Turkey
[2] Ferdowsi Univ Mashhad, Fac Math Sci, Dept Stat, Mashhad, Razavi Khorasan, Iran
[3] Cag Univ, Dept Math & Comp Sci, Mersin, Turkey
Keywords
Bridge regression; Restricted estimation; Machine learning; Quadratic approximation; Newton-Raphson; Variable selection; Multicollinearity; LINEAR-REGRESSION; VARIABLE SELECTION; RIDGE; LASSO; LIKELIHOOD; SHRINKAGE; ROBUST;
DOI
10.1007/s00500-021-05763-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This article is concerned with bridge regression, a family of penalized regressions with penalty function sum_{j=1}^{p} |beta_j|^q, q > 0, in a linear model with linear restrictions. The proposed restricted bridge (RBRIDGE) estimator simultaneously estimates parameters and selects important variables when prior information about the parameters is available, in either the low-dimensional or the high-dimensional case. Using a local quadratic approximation, the penalty term is approximated around a vector of local initial values. The RBRIDGE estimator then admits a closed-form expression that can be solved for any q > 0. Special cases of the proposal are the restricted LASSO (q = 1), restricted RIDGE (q = 2), and restricted Elastic Net (1 < q < 2) estimators. Theoretical properties of the RBRIDGE estimator are provided for the low-dimensional case, while computational aspects are given for both the low- and high-dimensional cases. An extensive Monte Carlo simulation study is conducted under different pieces of prior information, and the performance of the RBRIDGE estimator is compared with several competing penalty estimators as well as the ORACLE. Four real-data examples are also analyzed for the sake of comparison. The numerical results show that the suggested RBRIDGE estimator markedly outperforms its competitors when the prior information is exact or nearly exact.
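The two ingredients of the abstract (a local quadratic approximation of the bridge penalty, then a closed-form restricted solve) can be sketched as an iteratively reweighted ridge regression with an exact correction onto the linear restriction R beta = r. This is a hypothetical illustration of the general technique, not the authors' implementation; the function name `rbridge_lqa` and all tuning defaults are assumptions.

```python
import numpy as np

def rbridge_lqa(X, y, R, r, q=1.5, lam=1.0, n_iter=50, eps=1e-6):
    """Sketch of a restricted bridge fit via local quadratic approximation.

    Each iteration replaces the bridge penalty lam * sum |beta_j|^q by a
    quadratic around the current beta, which turns the step into a weighted
    ridge solve; the solution is then corrected so that R beta = r holds
    exactly (restricted least-squares form). Hypothetical sketch only.
    """
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # local initial values
    XtX, Xty = X.T @ X, X.T @ y
    for _ in range(n_iter):
        # LQA: d/db |b|^q = q*|b|^(q-1)*sign(b) => ridge weights q*|b|^(q-2),
        # with |b| floored at eps to avoid division by zero near zero.
        w = q * np.abs(beta).clip(eps) ** (q - 2)
        A = XtX + lam * np.diag(w)
        A_inv = np.linalg.inv(A)
        b_ridge = A_inv @ Xty                     # unrestricted weighted-ridge step
        # Restriction correction: project b_ridge onto {beta : R beta = r}
        # in the metric induced by A (restricted-estimator form).
        M = A_inv @ R.T @ np.linalg.inv(R @ A_inv @ R.T)
        beta_new = b_ridge - M @ (R @ b_ridge - r)
        if np.max(np.abs(beta_new - beta)) < eps:
            beta = beta_new
            break
        beta = beta_new
    return beta
```

With q = 1, the weights mimic the restricted LASSO; with q = 2, they are constant and a single step reproduces a restricted ridge fit, matching the special cases listed in the abstract.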
Pages: 8401-8416
Page count: 16
Cited references
39 in total
[1]   Big data analytics: integrating penalty strategies [J].
Ahmed, S. Ejaz ;
Yuzbasi, Bahadir .
INTERNATIONAL JOURNAL OF MANAGEMENT SCIENCE AND ENGINEERING MANAGEMENT, 2016, 11 (02) :105-115
[2]  
Ahmed SE., 2014, Penalty, Shrinkage, and Pretest Strategies: Variable Selection and Estimation, DOI DOI 10.1007/978-3-319-03149-1
[3]  
Akdeniz F, RBRIDGE RESTRICTED B
[4]   The Generalized Lasso Problem and Uniqueness [J].
Ali, Alnur ;
Tibshirani, Ryan J. .
ELECTRONIC JOURNAL OF STATISTICS, 2019, 13 (02) :2307-2347
[5]  
Arslan O, ARXIV PREPRINT ARXIV
[6]  
Cule E, RIDGE RIDGE REGRESSI
[7]   RESTRICTIONS ON VARIABLES [J].
DON, FJH .
JOURNAL OF ECONOMETRICS, 1982, 18 (03) :369-393
[8]   RcppArmadillo: Accelerating R with high-performance C++ linear algebra [J].
Eddelbuettel, Dirk ;
Sanderson, Conrad .
COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2014, 71 :1054-1063
[9]   Variable selection via nonconcave penalized likelihood and its oracle properties [J].
Fan, JQ ;
Li, RZ .
JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2001, 96 (456) :1348-1360
[10]   A STATISTICAL VIEW OF SOME CHEMOMETRICS REGRESSION TOOLS [J].
FRANK, IE ;
FRIEDMAN, JH .
TECHNOMETRICS, 1993, 35 (02) :109-135