Sparse learning with concave regularization: relaxation of the irrepresentable condition

Cited by: 0
Authors
Cerone, V [1 ]
Fosson, S. M. [1 ]
Regruto, D. [1 ]
Salam, A. [1 ]
Institutions
[1] Politecn Torino, Dipartimento Automat & Informat, Corso Duca Abruzzi 24, I-10129 Turin, Italy
Source
2020 59TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC) | 2020
Keywords
VARIABLE SELECTION;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Learning sparse models from data is an important task in all frameworks where relevant information must be identified within a large dataset. This can be achieved by formulating and solving suitable sparsity-promoting optimization problems. For linear regression models, Lasso is the most popular convex approach, based on an ℓ1-norm regularization. In contrast, in this paper we analyse a concave regularized approach, and we prove that it relaxes the irrepresentable condition, which is sufficient and essentially necessary for Lasso to select the correct significant parameters. In practice, this has the benefit of reducing the number of measurements required with respect to Lasso. Since the proposed problem is non-convex, we also discuss different algorithms to solve it, and we illustrate the obtained enhancement via numerical experiments.
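The contrast the abstract draws between the convex ℓ1 penalty and a concave one can be sketched with an iteratively reweighted ℓ1 scheme, a standard surrogate for concave regularizers such as the log-sum penalty. This is an illustrative sketch, not the specific algorithm analysed in the paper; all dimensions, the solver (ISTA), and the penalty choice are assumptions for the example.

```python
import numpy as np

def ista(A, y, lam, weights, n_iter=3000):
    """Proximal gradient (ISTA) for min 0.5*||Ax - y||^2 + lam * sum_i w_i*|x_i|."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))  # gradient step on the least-squares term
        thr = step * lam * weights          # per-coordinate soft-threshold level
        x = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)
    return x

rng = np.random.default_rng(0)
m, p, k = 50, 100, 5                        # measurements, dimension, sparsity (assumed)
A = rng.standard_normal((m, p)) / np.sqrt(m)
x_true = np.zeros(p)
support = rng.choice(p, k, replace=False)
x_true[support] = 2.0 * rng.choice([-1.0, 1.0], k)
y = A @ x_true                              # noiseless measurements

lam = 0.05
x_lasso = ista(A, y, lam, np.ones(p))       # plain Lasso: uniform weights

# Concave (log-sum) penalty via iteratively reweighted l1:
# each pass solves a weighted Lasso with w_i = 1/(|x_i| + eps),
# which penalizes small coefficients more and large ones less,
# mitigating the ell-1 bias on the true support.
eps, x_rw = 0.1, x_lasso.copy()
for _ in range(5):
    w = 1.0 / (np.abs(x_rw) + eps)
    x_rw = ista(A, y, lam, w)
```

In this noiseless setting both estimates locate the support, but the reweighted (concave-surrogate) solution is markedly less biased on the active coefficients, which is the practical benefit the abstract attributes to concave regularization.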
Pages: 396-401 (6 pages)