A Fast Randomized Incremental Gradient Method for Decentralized Nonconvex Optimization

Cited by: 19
Authors
Xin, Ran [1 ]
Khan, Usman A. [2 ]
Kar, Soummya [1 ]
Affiliations
[1] Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15232 USA
[2] Tufts Univ, Dept Elect & Comp Engn, Medford, MA 02155 USA
Funding
U.S. National Science Foundation;
Keywords
Convergence; Gradient methods; Optimization; Linear matrix inequalities; Stochastic processes; Robustness; Radio frequency; Decentralized nonconvex optimization; incremental gradient methods; variance reduction; DISTRIBUTED OPTIMIZATION; STOCHASTIC OPTIMIZATION; CONVERGENCE; ALGORITHMS; FRAMEWORK;
DOI
10.1109/TAC.2021.3122586
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
In this article, we study decentralized nonconvex finite-sum minimization problems described over a network of nodes, where each node possesses a local batch of data samples. In this context, we analyze a single-timescale randomized incremental gradient method, called GT-SAGA. GT-SAGA is computationally efficient as it evaluates one component gradient per node per iteration and achieves provably fast and robust performance by leveraging node-level variance reduction and network-level gradient tracking. For general smooth nonconvex problems, we show the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point and further describe regimes of practical significance in which it outperforms the existing approaches and in which it achieves a network topology-independent iteration complexity, respectively. When the global function satisfies the Polyak-Łojasiewicz condition, we show that GT-SAGA exhibits linear convergence to an optimal solution in expectation and describe regimes of practical interest where the performance is network topology independent and improves upon the existing methods. Numerical experiments are included to highlight the main convergence aspects of GT-SAGA in nonconvex settings.
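The abstract describes GT-SAGA's two ingredients: a SAGA-style, node-level variance-reduced gradient built from one randomly drawn component gradient per node per iteration, and network-level gradient tracking over a doubly stochastic mixing matrix. The Python sketch below illustrates that structure on a synthetic decentralized least-squares problem; the data, the mixing matrix, the step size, and the exact update ordering are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

# A minimal GT-SAGA-style sketch on a synthetic decentralized least-squares
# problem. All problem data and parameters below are assumed for illustration.

rng = np.random.default_rng(0)
n_nodes, m_samples, dim = 4, 20, 5
alpha = 0.01  # constant step size (assumed)

# Synthetic local data: node i holds features A[i] and targets b[i].
A = rng.normal(size=(n_nodes, m_samples, dim))
b = rng.normal(size=(n_nodes, m_samples))

def component_grad(i, j, x):
    """Gradient of the j-th component at node i: 0.5 * (a^T x - b)^2."""
    a = A[i, j]
    return (a @ x - b[i, j]) * a

# Doubly stochastic mixing matrix for a 4-node ring (lazy Metropolis weights).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros((n_nodes, dim))                 # local iterates
table = np.zeros((n_nodes, m_samples, dim))  # SAGA gradient table per node
g = table.mean(axis=1)                       # local variance-reduced gradients
y = g.copy()                                 # gradient trackers

for k in range(500):
    # Network-level step: mix iterates and descend along the tracked gradient.
    x = W @ x - alpha * y

    g_new = np.empty_like(g)
    for i in range(n_nodes):
        j = rng.integers(m_samples)          # one component gradient per node
        grad_j = component_grad(i, j, x[i])
        # Node-level SAGA variance reduction.
        g_new[i] = grad_j - table[i, j] + table[i].mean(axis=0)
        table[i, j] = grad_j                 # refresh the table entry

    # Gradient tracking: mix trackers, then correct with the local change.
    y = W @ y + g_new - g
    g = g_new

# Stationarity check at the network-average iterate.
x_bar = x.mean(axis=0)
full_grad = np.mean([component_grad(i, j, x_bar)
                     for i in range(n_nodes)
                     for j in range(m_samples)], axis=0)
print(f"||grad f(x_bar)|| = {np.linalg.norm(full_grad):.3e}")
```

In this sketch, each node touches only one data sample per iteration, while the gradient table and the tracker together drive the variance of the stochastic search direction toward zero, which is the mechanism behind the fast rates stated in the abstract.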
Pages: 5150-5165
Page count: 16