Gradient-free method for nonsmooth distributed optimization

Cited by: 35
Authors
Li, Jueyou [1 ,2 ]
Wu, Changzhi [3 ]
Wu, Zhiyou [1 ]
Long, Qiang [4 ]
Affiliations
[1] Chongqing Normal Univ, Sch Math, Chongqing 400047, Peoples R China
[2] Federat Univ Australia, Sch Sci Informat Technol & Engn, Ballarat, Vic 3353, Australia
[3] Curtin Univ, Sch Built Environm, Bentley, WA 6102, Australia
[4] Southwest Univ Sci & Technol, Sch Sci, Mianyang 621010, Sichuan, Peoples R China
Funding
Academy of Finland;
Keywords
Distributed algorithm; Gaussian smoothing; Gradient-free method; Convex optimization; SUBGRADIENT METHODS; CONSENSUS;
DOI
10.1007/s10898-014-0174-2
Chinese Library Classification (CLC)
C93 [Management]; O22 [Operations Research];
Discipline codes
070105 ; 12 ; 1201 ; 1202 ; 120202 ;
Abstract
In this paper, we consider a distributed nonsmooth optimization problem over a multi-agent computational network. We first extend Nesterov's (centralized) random gradient-free algorithm, based on the Gaussian smoothing technique, to the distributed setting. We then prove the convergence of the algorithm and give an explicit convergence rate in terms of the network size and topology. The proposed method is gradient-free, requiring only cost function values, which may be preferred by practitioners. In theory, this comes at a price: the convergence rate may be worse than that of distributed subgradient-based methods by a factor of up to the dimension of the agents' decision variable. However, our numerical simulations show that for some nonsmooth problems our method can even outperform subgradient-based methods, which may be caused by the slow convergence of those methods.
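The scheme described in the abstract combines a consensus (mixing) step with Nesterov's Gaussian-smoothing oracle, which estimates a gradient from two function values only. The sketch below is a minimal illustration under assumed choices, not the authors' implementation: the toy local costs, the complete-graph mixing matrix `W`, and the diminishing step sizes are all illustrative assumptions.

```python
import numpy as np

def gaussian_smoothing_oracle(f, x, mu, rng):
    """Random gradient-free oracle via Gaussian smoothing:
    g = (f(x + mu*u) - f(x)) / mu * u,  u ~ N(0, I).
    Needs only two function evaluations, no (sub)gradient."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def distributed_step(F, X, W, alpha, mu, rng):
    """One round: agents mix neighbor states with a doubly
    stochastic weight matrix W, then each takes a local
    gradient-free step on its own cost."""
    X_avg = W @ X                          # consensus (mixing) step
    G = np.array([gaussian_smoothing_oracle(F[i], X_avg[i], mu, rng)
                  for i in range(len(F))])
    return X_avg - alpha * G               # local gradient-free update

# Toy problem (an assumption for illustration): 3 agents minimize
# the nonsmooth aggregate cost sum_i ||x - b_i||_1 in dimension 2.
rng = np.random.default_rng(0)
b = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
F = [lambda x, bi=bi: np.abs(x - bi).sum() for bi in b]
W = np.full((3, 3), 1.0 / 3.0)             # complete-graph mixing matrix
X = np.zeros((3, 2))                       # one row per agent
for k in range(2000):
    alpha = 0.5 / np.sqrt(k + 1)           # diminishing step size
    X = distributed_step(F, X, W, alpha, mu=1e-3, rng=rng)
# Agents reach near-consensus close to the componentwise median (1, 1).
```

The mixing step keeps the agents' states close to their average, while the diminishing step sizes damp the extra noise that the two-point oracle introduces compared with a true subgradient.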
Pages: 325-340
Page count: 16