AN ADAPTIVE l1-l2-TYPE MODEL WITH HIERARCHIES FOR SPARSE SIGNAL RECONSTRUCTION PROBLEM

Cited: 0
Authors
Ding, Yanyun [1 ]
Yue, Zhixiao [2 ]
Zhang, Haibin [1 ]
Affiliations
[1] Beijing Univ Technol, Dept Operat Res & Informat Engn, Beijing 100124, Peoples R China
[2] Southern Univ Sci & Technol, Dept Math, Shenzhen 518055, Peoples R China
Source
PACIFIC JOURNAL OF OPTIMIZATION | 2022, Vol. 18, No. 4
Keywords
convex optimization; sparse signal reconstruction; hierarchical optimization; lp-l1-l2 minimization; alternating direction method of multipliers; MINIMIZATION; REGRESSION; MULTIPLIERS; EFFICIENT; ALGORITHM; SELECTION;
DOI
Not available
Chinese Library Classification
C93 [Management]; O22 [Operations Research];
Discipline Codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
This paper addresses an adaptive l1-l2 regularized model for sparse signal reconstruction in the framework of hierarchical convex optimization. The problem is posed as a bi-level convex program, and with suitable prior information the challenging bi-level model can be recast as a single-level constrained optimization problem. The l1-l2-norm regularized least-squares sparse optimization problem is also known as the elastic net, and extensive simulations and real-world data show that the elastic net often outperforms the Lasso. However, the elastic net is mainly suited to handling Gaussian noise. In this paper, we propose an adaptive and robust model for reconstructing sparse signals, termed lp-l1-l2, in which the lp-norm with p >= 1 measures the data fidelity and the l1-l2 term measures the sparsity. The model is robust and flexible in the sense that it can handle different types of noise. To solve it, we employ an alternating direction method of multipliers (ADMM) based on introducing one or a pair of auxiliary variables. Numerical experiments demonstrate that both the proposed model and algorithms outperform the Lasso model solved by ADMM on sparse signal reconstruction problems.
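The abstract's elastic-net (l1-l2 regularized least-squares) baseline admits a standard ADMM splitting: duplicate the variable as z = x, so the x-update is a fixed linear solve and the z-update is soft-thresholding. The sketch below is only an illustration of that classic splitting, not the authors' adaptive lp-l1-l2 algorithm; all parameter values (`lam1`, `lam2`, `rho`, problem sizes) are chosen arbitrarily for demonstration.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the prox operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_elastic_net(A, b, lam1=0.1, lam2=0.01, rho=1.0, n_iter=200):
    """ADMM for  min_x 0.5*||Ax - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2.

    Splitting z = x: the x-update is a linear solve with a fixed matrix
    (factored once), the z-update is soft-thresholding, u is the scaled dual.
    """
    m, n = A.shape
    Atb = A.T @ b
    # (A^T A + (lam2 + rho) I) is constant, so Cholesky-factor it once.
    L = np.linalg.cholesky(A.T @ A + (lam2 + rho) * np.eye(n))
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # L L^T x = rhs
        z = soft_threshold(x + u, lam1 / rho)
        u = u + x - z
    return z  # z carries exact zeros, so it is the sparse iterate

# Usage: recover a sparse vector from noisy Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = admm_elastic_net(A, b, lam1=0.1, lam2=0.01)
```

Returning `z` rather than `x` is deliberate: soft-thresholding produces exact zeros, whereas the x-iterate is only approximately sparse.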
Pages: 695-712
Page count: 18
Related Papers
50 records
  • [1] MINIMIZATION OF L1 OVER L2 FOR SPARSE SIGNAL RECOVERY WITH CONVERGENCE GUARANTEE
    Tao, Min
    SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2022, 44 (02) : A770 - A797
  • [2] Reconstruction analysis of block-sparse signal via truncated l2/l1-minimisation with redundant dictionaries
    Liu, Jiayi
    Wang, Jianjun
    Zhang, Feng
    IET SIGNAL PROCESSING, 2018, 12 (08) : 1034 - 1042
  • [3] Centralized and Collective Neurodynamic Optimization Approaches for Sparse Signal Reconstruction via L1-Minimization
    Zhao, You
    Liao, Xiaofeng
    He, Xing
    Tang, Rongqiang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (12) : 7488 - 7501
  • [4] An inertial projection neural network for sparse signal reconstruction via l1-2 minimization
    Zhu, Lijuan
    Wang, Jianjun
    He, Xing
    Zhao, You
    NEUROCOMPUTING, 2018, 315 : 89 - 95
  • [5] Sorted L1/L2 Minimization for Sparse Signal Recovery
    Wang, Chao
    Yan, Ming
    Yu, Junjie
    JOURNAL OF SCIENTIFIC COMPUTING, 2024, 99 (02)
  • [6] Sparse Coding Algorithm with Negentropy and Weighted l1-Norm for Signal Reconstruction
    Zhao, Yingxin
    Liu, Zhiyang
    Wang, Yuanyuan
    Wu, Hong
    Ding, Shuxue
    ENTROPY, 2017, 19 (11)
  • [7] Sparse portfolio optimization via l1 over l2 regularization
    Wu, Zhongming
    Sun, Kexin
    Ge, Zhili
    Allen-Zhao, Zhihua
    Zeng, Tieyong
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2024, 319 (03) : 820 - 833
  • [8] l1-Sparse Reconstruction of Sharp Point Set Surfaces
    Avron, Haim
    Sharf, Andrei
    Greif, Chen
    Cohen-Or, Daniel
    ACM TRANSACTIONS ON GRAPHICS, 2010, 29 (05)
  • [9] SPARSE SIGNAL RECONSTRUCTION VIA THE APPROXIMATIONS OF l0 QUASINORM
    Wang, Jun
    Wang, Xing Tao
    JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION, 2020, 16 (04) : 1907 - 1925
  • [10] L1-Minimization Algorithms for Sparse Signal Reconstruction Based on a Projection Neural Network
    Liu, Qingshan
    Wang, Jun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2016, 27 (03) : 698 - 707