A DISTRIBUTED DOUGLAS-RACHFORD SPLITTING METHOD FOR SOLVING LINEAR CONSTRAINED MULTI-BLOCK WEAKLY CONVEX PROBLEMS

Authors
Hu, Leyu [1 ]
Xie, Jiaxin [1 ]
Cai, Xingju [2 ]
Han, Deren [1 ]
Affiliations
[1] Beihang Univ, Sch Math Sci, Beijing 100191, Peoples R China
[2] Nanjing Normal Univ, Sch Math Sci, Nanjing 210023, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Multi-block problems; weakly convex; parallel algorithm; distributed Douglas-Rachford splitting method; error bound; linear convergence rate; SMOOTHING L-1/2 REGULARIZATION; AUGMENTED LAGRANGIAN METHOD; CONVERGENCE ANALYSIS; GRADIENT-METHOD; MINIMIZATION; ADMM; DECOMPOSITION; ALGORITHM;
DOI
10.3934/ipi.2024048
Chinese Library Classification
O29 [Applied Mathematics]
Discipline Code
070104
Abstract
In recent years, a distributed Douglas-Rachford splitting method (DDRSM) has been proposed to tackle multi-block separable convex optimization problems. Compared with various augmented-Lagrangian-based parallel algorithms, DDRSM offers easier subproblems and greater efficiency on large-scale problems. Building on this, we extend DDRSM to the weakly convex setting. Assuming weak convexity of the objective function and an error bound condition, we establish the linear convergence rate of DDRSM. Numerical experiments on compressed sensing and robust alignment of structures across images (RASL) show that DDRSM retains its advantages over augmented-Lagrangian-based algorithms even in weakly convex scenarios.
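The record does not reproduce the paper's DDRSM iteration. As a rough illustration of the Douglas-Rachford splitting idea it builds on, the sketch below applies classical two-operator DR splitting to a small compressed-sensing (lasso) model; the function names, the lasso instance, and the parameter choices are illustrative assumptions, not the authors' multi-block algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dr_splitting_lasso(A, b, lam=0.1, gamma=1.0, iters=500):
    """Classical Douglas-Rachford splitting (illustrative, not the paper's DDRSM) for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    a standard compressed-sensing model.

    Iteration: x = prox_{gamma f}(z); y = prox_{gamma g}(2x - z); z += y - x.
    """
    n = A.shape[1]
    # prox of f(x) = 0.5*||Ax - b||^2 solves (I + gamma A^T A) x = z + gamma A^T b
    M = np.eye(n) + gamma * (A.T @ A)
    Atb = A.T @ b
    z = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, z + gamma * Atb)          # prox of the smooth term
        y = soft_threshold(2.0 * x - z, gamma * lam)     # prox of the l1 term
        z += y - x                                       # DR correction step
    return y
```

Note that this two-block sketch does not capture what distinguishes DDRSM in the paper: the parallel, distributed handling of multiple blocks under a linear coupling constraint, and its convergence under weak convexity with an error bound condition.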
Pages: 632-659 (27 pages)