Inertial primal-dual projection neurodynamic approaches for constrained convex optimization problems and application to sparse recovery

Times Cited: 0
Authors
Zhao, You [1 ]
Allen-Zhao, Zhihua [2 ]
Wang, Lei [3 ]
He, Xing [1 ]
Mao, Qin [4 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing Key Lab Nonlinear Circuits & Intelligent, Chongqing, Peoples R China
[2] Xidian Univ, Sch Math & Stat, Xian, Peoples R China
[3] Penn State Univ, Dept Stat, University Pk, PA USA
[4] Qiannan Normal Univ Nationalities, Sch Comp & Informat Technol, Key Lab Complex Syst & Intelligent Optimizat Guizh, Duyun 558000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neurodynamic approaches; Inertial primal-dual; Projection operator; Time scaling; Sparse signal reconstruction; NEURAL-NETWORK; QUADRATIC OPTIMIZATION; CONVERGENCE; MINIMIZATION; DYNAMICS;
DOI
10.1016/j.neunet.2025.107274
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Second-order (inertial) neurodynamic approaches are excellent tools for solving convex optimization problems in an accelerated manner, yet most existing neurodynamic approaches focus on unconstrained or simply constrained convex optimization problems. This paper presents a centralized primal-dual projection neurodynamic approach with time scaling (CPDPNA-TS). Built upon the heavy-ball method, the approach is tailored to convex optimization problems with set and affine constraints, and consists of a second-order projection ODE (ordinary differential equation) with derivative feedback for the primal variables and a first-order ODE for the dual variables. We prove existence, uniqueness, and feasibility of a strong global solution to CPDPNA-TS. Subsequently, we demonstrate that CPDPNA-TS enjoys a nonergodic exponential and an ergodic O(1/t) convergence rate when suitable time-scaling parameters are chosen, without a strong convexity assumption on the objective function. In addition, we extend CPDPNA-TS to a version with a small perturbation and to a distributed framework, and prove that both extensions retain convergence properties similar to those of CPDPNA-TS. Finally, we perform numerical experiments on sparse recovery to illustrate the effectiveness and superiority of the presented projection neurodynamic approaches.
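The record does not give the CPDPNA-TS equations themselves, but the abstract's structure (a second-order projection ODE for the primal variables coupled with a first-order ODE for the dual variables) can be sketched numerically. The following is a minimal illustration, not the paper's exact system: a generic heavy-ball primal-dual projection dynamic, integrated by forward Euler, for the assumed toy problem min f(x) = 0.5||x||^2 subject to x >= 0 and Ax = b (a sparse-recovery variant would use an l1-type objective instead).

```python
import numpy as np

# Illustrative inertial primal-dual projection dynamic (NOT the paper's
# exact CPDPNA-TS ODE), discretized with forward Euler:
#   x'' + alpha x' = P_Omega(x - grad f(x) - A^T lam) - x   (primal, 2nd order)
#   lam'           = A x - b                                (dual, 1st order)
rng = np.random.default_rng(0)
n, m = 20, 8
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2)           # normalize spectral norm so the dynamic is stable
b = A @ np.abs(rng.standard_normal(n))

grad_f = lambda x: x                # gradient of f(x) = 0.5*||x||^2
proj = lambda x: np.maximum(x, 0.0) # projection onto Omega = {x >= 0}

h, alpha, steps = 1e-2, 3.0, 60_000 # step size, damping, iteration count
x, v, lam = np.zeros(n), np.zeros(n), np.zeros(m)
for _ in range(steps):
    acc = proj(x - grad_f(x) - A.T @ lam) - x - alpha * v  # primal acceleration
    x = x + h * v
    v = v + h * acc
    lam = lam + h * (A @ x - b)     # dual variables integrate the residual

print(np.linalg.norm(A @ x - b))    # feasibility residual ||Ax - b||
```

At an equilibrium (v = 0, acc = 0) the state satisfies x = P_Omega(x - grad f(x) - A^T lam) and Ax = b, i.e. the KKT conditions of the toy problem; the damping alpha and the normalization of A are chosen so that the discretized dynamic stays stable.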
Pages: 13