A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks

Cited: 0
Authors
Wu, Xuyang [1 ]
Sou, Kin Cheong [2 ]
Lu, Jie [3 ,4 ,5 ]
Affiliations
[1] KTH Royal Inst Technol, Div Decis & Control Syst, Stockholm, Sweden
[2] Natl Sun Yat Sen Univ, Dept Elect Engn, Kaohsiung, Taiwan
[3] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
[4] Shanghai Engn Res Ctr Energy Efficient & Custom AI, Shanghai, Peoples R China
[5] Room 1D-201B,SIST Bldg,393 Middle Huaxia Middle Rd, Shanghai 201210, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Distributed optimization; decentralized optimization; dual optimization method; algorithms; convergence
DOI
10.1080/10556788.2023.2189713
CLC classification
TP31 [Computer software]
Subject classification
081202; 0835
Abstract
In this paper, we develop a regularized Fenchel dual gradient method (RFDGM), which allows nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, that minimizes the sum of their local objective functions subject to their local constraints. Unlike most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM handles problems with general convex objective functions and distinct local constraints, and still admits non-asymptotic convergence results. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ε-accuracy in both optimality and feasibility within O((1/ε²) ln(1/ε)) iterations. This iteration complexity improves to O((1/ε) ln(1/ε)) if the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the competitiveness of RFDGM in practice.
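The abstract describes RFDGM only at a high level. As a rough illustration of the general idea behind regularized dual gradient methods (dual gradient ascent on a Tikhonov-regularized dual of a consensus-constrained problem), the following toy sketch is offered. It is an assumed simplification, not the paper's RFDGM: the two-node setup, quadratic local costs, step size `eta`, and regularization weight `rho` are all invented for the example.

```python
# Toy sketch (assumed, NOT the paper's RFDGM): regularized dual gradient
# ascent for a two-node consensus problem
#     minimize f1(x1) + f2(x2)   subject to x1 = x2,
# with quadratic local costs f_i(x) = (x - c_i)^2 / 2, chosen so that each
# node's local minimization has a closed form.

def local_min(c, lam, sign):
    # argmin_x (x - c)^2 / 2 + sign * lam * x  =  c - sign * lam
    return c - sign * lam

def regularized_dual_ascent(c1=1.0, c2=3.0, rho=0.01, eta=0.4, iters=200):
    lam = 0.0  # dual variable for the coupling constraint x1 = x2
    for _ in range(iters):
        x1 = local_min(c1, lam, +1)   # node 1 solves its local subproblem
        x2 = local_min(c2, lam, -1)   # node 2 solves its local subproblem
        # gradient of the regularized dual: constraint residual - rho * lam;
        # the -rho * lam term is the Tikhonov regularization of the dual
        lam += eta * (x1 - x2 - rho * lam)
    return local_min(c1, lam, +1), local_min(c2, lam, -1)

x1, x2 = regularized_dual_ascent()  # x1, x2 near 2, the minimizer of f1 + f2
```

The regularization makes the dual objective strongly concave, so plain dual gradient ascent converges at a linear rate at the price of a small consensus error (here of order `rho`); driving `rho` toward zero trades that bias against conditioning, which is the kind of trade-off the paper's complexity bounds quantify.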
Pages: 813-836
Number of pages: 24