Lagrange Programming Neural Network for Nondifferentiable Optimization Problems in Sparse Approximation

Cited by: 50
Authors
Feng, Ruibin [1 ]
Leung, Chi-Sing [1 ]
Constantinides, Anthony G. [2 ]
Zeng, Wen-Jun [1 ]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Hong Kong, Peoples R China
[2] Imperial Coll London, Commun & Signal Proc, London SW7 2AZ, England
Keywords
Lagrange programming neural networks (LPNNs); locally competitive algorithm (LCA); optimization; variational inequalities; constrained optimization; quadratic optimization; atomic decomposition; bound constraints; convergence; systems; minimization; stability; real
DOI
10.1109/TNNLS.2016.2575860
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The major limitation of the Lagrange programming neural network (LPNN) approach is that the objective function and the constraints must be twice differentiable. Since sparse approximation involves nondifferentiable functions, the original LPNN approach is not suitable for recovering sparse signals. This paper proposes a new formulation of the LPNN approach based on the concept of the locally competitive algorithm (LCA). Unlike the classical LCA, which can handle unconstrained optimization problems only, the proposed LPNN approach can solve constrained optimization problems. Two problems in sparse approximation are considered: basis pursuit (BP) and constrained BP denoising (CBPDN). We propose two LPNN models, BP-LPNN and CBPDN-LPNN, to solve these two problems. For both models, we show that the equilibrium points of the model are exactly the optimal solutions of the corresponding problem, and that these equilibrium points are stable. Simulations are carried out to verify the effectiveness of the two LPNN models.
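To make the approach concrete, below is a minimal numerical sketch of BP-LPNN-style dynamics for basis pursuit (minimize ||x||_1 subject to Ax = b). It relies on the standard LCA device: with the unit soft-threshold activation x = T_1(u), the residual u - x is a valid subgradient of ||x||_1, so the coupled Lagrangian dynamics can be written as du/dt = x - u - A^T λ and dλ/dt = Ax - b. The function names, step size, iteration count, and toy problem sizes are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def soft_threshold(u, thresh=1.0):
    """LCA activation x = T_thresh(u): elementwise soft threshold."""
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)

def bp_lpnn(A, b, dt=0.01, n_steps=20000):
    """Sketch of BP-LPNN dynamics for basis pursuit:
        minimize ||x||_1  subject to  A x = b.
    Integrates the neuron states u and the Lagrange-multiplier
    neurons lam with forward Euler; the network output is x = T_1(u).
    (dt and n_steps are illustrative choices, not from the paper.)
    """
    m, n = A.shape
    u = np.zeros(n)    # internal states of the variable neurons
    lam = np.zeros(m)  # states of the constraint (multiplier) neurons
    for _ in range(n_steps):
        x = soft_threshold(u)
        du = x - u - A.T @ lam   # -(subgradient of ||x||_1) - A^T lam
        dlam = A @ x - b         # drives the constraint residual to zero
        u += dt * du
        lam += dt * dlam
    return soft_threshold(u)

# Toy usage: recover a 4-sparse vector from 32 random measurements.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = bp_lpnn(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

In an analog realization these two coupled ODEs would evolve in continuous time; the forward-Euler loop merely emulates that evolution, and the stability of the equilibrium points claimed in the abstract is what makes such a simple integration scheme viable.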
Pages: 2395-2407
Number of pages: 13