Estimation in the High Dimensional Additive Hazard Model with l0 Type of Penalty

Cited by: 0
Authors
Zhou, Yunpeng [1 ]
Yuen, Kam Chuen [1 ]
Affiliations
[1] Univ Hong Kong, Dept Stat & Actuarial Sci, Pokfulam, Hong Kong, Peoples R China
Keywords
additive hazard model; ADMM; high-dimensional analysis; survival data; variable selection; subset selection; likelihood
DOI
10.1016/j.ecosta.2022.09.002
Chinese Library Classification
F [Economics]
Discipline Classification Code
02
Abstract
High-dimensional data are commonly encountered in survival data analysis, and penalized regression is widely applied for variable selection in this setting. The LASSO, SCAD and MCP are standard penalties developed in recent years to achieve more accurate selection of parameters. The l(0) penalty, which selects the best subset of parameters and provides unbiased estimation, is relatively difficult to handle because of the NP-hard complexity resulting from its non-smooth and non-convex objective function. For the additive hazard model, most methods developed so far focus on providing a smoothed version of the l(0)-norm. Instead of mimicking these methods, two augmented Lagrangian based algorithms, namely the ADMM-l(0) method and the APM-l(0) method, are proposed to approximate the optimal solution generated by the l(0) penalty. The ADMM-l(0) algorithm can achieve unbiased parameter estimation, while the two-step APM-l(0) method is computationally more efficient. The convergence of ADMM-l(0) can be proved under strict assumptions. Under moderate sample sizes, both methods perform well in selecting the best subset of parameters, especially in terms of controlling the false positive rate. Finally, both methods are applied to two real datasets. (c) 2022 EcoSta Econometrics and Statistics. Published by Elsevier B.V. All rights reserved.
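The augmented Lagrangian splitting behind an ADMM-l(0) scheme can be illustrated with a minimal sketch. This is not the authors' algorithm for the additive hazard model: as an assumption for illustration, the hazard-model loss is replaced by a least-squares loss, so the split b = z alternates a ridge-type quadratic update with the exact prox of the l(0) penalty, which is hard thresholding at level sqrt(2*lam/rho).

```python
import numpy as np

def admm_l0(X, y, lam=0.1, rho=1.0, n_iter=200):
    """Illustrative ADMM for min 0.5*||y - X b||^2 + lam*||b||_0 via the
    split b = z (least-squares stand-in for the additive hazard loss)."""
    n, p = X.shape
    A = X.T @ X + rho * np.eye(p)          # matrix for the smooth b-update
    Xty = X.T @ y
    z = np.zeros(p)
    u = np.zeros(p)                        # scaled dual variable
    thresh = np.sqrt(2.0 * lam / rho)      # hard-threshold level of the l0 prox
    for _ in range(n_iter):
        b = np.linalg.solve(A, Xty + rho * (z - u))   # quadratic (ridge-type) step
        v = b + u
        z = np.where(np.abs(v) > thresh, v, 0.0)      # prox of (lam/rho)*||.||_0
        u = u + b - z                                  # dual update
    return z

# Toy usage: a 3-sparse signal with light noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + 0.01 * rng.standard_normal(100)
est = admm_l0(X, y, lam=0.1, rho=1.0)
print(np.nonzero(est)[0])
```

Because the hard-threshold step does not shrink the surviving coordinates, the nonzero estimates are not biased toward zero, which mirrors the unbiasedness property the abstract attributes to the l(0) penalty.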
Pages: 88-97 (10 pages)
Related Papers
50 records
  • [1] Scalable network estimation with L0 penalty
    Kim, Junghi
    Zhu, Hongtu
    Wang, Xiao
    Do, Kim-Anh
    STATISTICAL ANALYSIS AND DATA MINING, 2021, 14 (01) : 18 - 30
  • [2] Model selection in high-dimensional quantile regression with seamless L0 penalty
    Ciuperca, Gabriela
    STATISTICS & PROBABILITY LETTERS, 2015, 107 : 313 - 323
  • [3] Variable selection and estimation using a continuous approximation to the L0 penalty
    Wang, Yanxin
    Fan, Qibin
    Zhu, Li
    ANNALS OF THE INSTITUTE OF STATISTICAL MATHEMATICS, 2018, 70 (01) : 191 - 214
  • [4] SPARSE k-MEANS WITH l∞/l0 PENALTY FOR HIGH-DIMENSIONAL DATA CLUSTERING
    Chang, Xiangyu
    Wang, Yu
    Li, Rongjian
    Xu, Zongben
    STATISTICA SINICA, 2018, 28 (03) : 1265 - 1284
  • [5] Estimation of l0 norm penalized models: A statistical treatment
    Yang, Yuan
    McMahan, Christopher S.
    Wang, Yu-Bo
    Ouyang, Yuyuan
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2024, 192
  • [6] A Continuous Exact l0 Penalty (CEL0) for Least Squares Regularized Problem
    Soubies, Emmanuel
    Blanc-Feraud, Laure
    Aubert, Gilles
    SIAM JOURNAL ON IMAGING SCIENCES, 2015, 8 (03): 1607 - 1639
  • [7] Group Sparse Recovery via the l0(l2) Penalty: Theory and Algorithm
    Jiao, Yuling
    Jin, Bangti
    Lu, Xiliang
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2017, 65 (04) : 998 - 1012
  • [8] L0-Regularized Learning for High-Dimensional Additive Hazards Regression
    Zheng, Zemin
    Zhang, Jie
    Li, Yang
    INFORMS JOURNAL ON COMPUTING, 2022, 34 (05) : 2762 - 2775
  • [9] l0 Sparse Inverse Covariance Estimation
    Marjanovic, Goran
    Hero, Alfred O., III
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2015, 63 (12) : 3218 - 3231
  • [10] A neutral comparison of algorithms to minimize L0 penalties for high-dimensional variable selection
    Frommlet, Florian
    BIOMETRICAL JOURNAL, 2024, 66 (01)