SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization

Cited by: 0
Authors
Wai, Hoi-To [1 ]
Freris, Nikolaos M. [2 ,3 ]
Nedic, Angelia [1 ]
Scaglione, Anna [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
[2] New York Univ Abu Dhabi, Div Engn, Abu Dhabi, U Arab Emirates
[3] NYU, Tandon Sch Engn, Brooklyn, NY USA
Source
2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC) | 2018
Keywords
Distributed optimization; Incremental methods; Asynchronous algorithms; Randomized algorithms; Multi-agent systems; Machine learning; SUBGRADIENT METHODS; CLOCKS;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite-sum optimization problems. SUCAG constitutes an unbiased total gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under the general asynchronous model of computation, in which each function is selected infinitely often with possibly unbounded (but sublinear) delay. For strongly convex problems, we establish linear convergence for the SUCAG method. When the initialization point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach to implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on an undirected communication graph. We show that our analysis applies as long as the graph is connected and, notably, establishes an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merits of our algorithm over existing methods.
Pages: 1751-1756
Number of pages: 6
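To make the abstract's notion of a curvature-aided total gradient concrete, below is a minimal Python/NumPy sketch of a curvature-aided aggregated gradient step on a toy regularized logistic-regression finite sum. It is illustrative only and is not the exact SUCAG recursion: the uniform i.i.d. sampling, step size, problem data, and all names are assumptions, and the sketch omits the unbiasing correction, asynchronous delays, and random-walk gossiping that distinguish SUCAG.

```python
import numpy as np

# Illustrative sketch of a curvature-aided aggregated gradient step
# (NOT the exact SUCAG update from the paper).
# Problem: f(theta) = (1/n) * sum_i f_i(theta), with f_i a regularized log-loss.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
y = rng.integers(0, 2, size=n).astype(float)   # labels in {0, 1}
lam = 0.1                                      # L2 regularization weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_i(i, theta):
    # Gradient of f_i(theta) = log-loss on sample i + (lam/2)*||theta||^2.
    return (sigmoid(A[i] @ theta) - y[i]) * A[i] + lam * theta

def hess_i(i, theta):
    # Hessian of f_i at theta.
    s = sigmoid(A[i] @ theta)
    return s * (1.0 - s) * np.outer(A[i], A[i]) + lam * np.eye(d)

theta = np.zeros(d)
tracked = np.zeros((n, d))   # last point at which each component was refreshed
step = 0.1                   # illustrative constant step size

for k in range(500):
    i = rng.integers(n)          # uniform i.i.d. component selection (a simplification)
    tracked[i] = theta.copy()    # refresh component i at the current iterate
    # Curvature-aided surrogate of the total gradient: each component's gradient
    # is linearized around its own tracked point. Recomputed in full here for
    # clarity; an efficient implementation maintains this aggregate incrementally.
    g = np.mean(
        [grad_i(j, tracked[j]) + hess_i(j, tracked[j]) @ (theta - tracked[j])
         for j in range(n)],
        axis=0,
    )
    theta -= step * g

# Report the norm of the true average gradient at the final iterate.
print("||average gradient||:",
      np.linalg.norm(np.mean([grad_i(j, theta) for j in range(n)], axis=0)))
```

The key design point the sketch illustrates is that stale component gradients are corrected with second-order (Hessian) information about each component around its tracked point, which is what lets curvature-aided schemes track the total gradient more tightly than first-order aggregated methods such as SAGA.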