Distributed Saddle Point Problems for Strongly Concave-Convex Functions

Citations: 3
Authors
Qureshi, Muhammad I. [1 ]
Khan, Usman A. [1 ]
Affiliations
[1] Tufts Univ, Elect & Comp Engn Dept, Medford, MA 02155 USA
Keywords
Decentralized optimization; saddle point problems; constrained optimization; descent ascent methods; GRADIENT METHODS; OPTIMIZATION; CONVERGENCE; ALGORITHMS; EXTRA;
DOI
10.1109/TSIPN.2023.3317807
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
In this article, we propose GT-GDA, a distributed optimization method to solve saddle point problems of the form min_x max_y {F(x, y) := G(x) + ⟨y, Px⟩ - H(y)}, where the functions G(·) and H(·) and the coupling matrix P are distributed over a strongly connected network of nodes. GT-GDA is a first-order method that uses gradient tracking to eliminate the dissimilarity caused by heterogeneous data distribution among the nodes. In its most general form, GT-GDA includes a consensus over the local coupling matrices to achieve the optimal (unique) saddle point, at the expense of increased communication. To avoid this, we propose a more efficient variant, GT-GDA-Lite, that does not incur additional communication, and we analyze its convergence in various scenarios. We show that GT-GDA converges linearly to the unique saddle point when G is smooth and convex, H is smooth and strongly convex, and the global coupling matrix P has full column rank. We further characterize the regime in which GT-GDA exhibits network topology-independent convergence. We then show the linear convergence of GT-GDA-Lite to an error around the unique saddle point, which goes to zero when the coupling cost ⟨y, Px⟩ is common to all nodes, or when G and H are quadratic. Numerical experiments illustrate the convergence properties and the importance of GT-GDA and GT-GDA-Lite for several applications.
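The saddle point structure described in the abstract can be illustrated with a minimal *centralized* gradient descent ascent sketch, assuming the simple quadratic choices G(x) = ½‖x‖² and H(y) = ½‖y‖² (hypothetical, for illustration only; the paper's GT-GDA is a distributed method with gradient tracking and consensus over local coupling matrices, which is not reproduced here):

```python
import numpy as np

# Sketch of gradient descent ascent (GDA) for the saddle point problem
#   min_x max_y  F(x, y) = G(x) + <y, Px> - H(y)
# with illustrative quadratics G(x) = 0.5||x||^2, H(y) = 0.5||y||^2.
# Step size and problem dimensions are assumptions, not from the paper.

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 3))   # coupling matrix; full column rank w.h.p.

x = rng.standard_normal(3)
y = rng.standard_normal(4)
alpha = 0.1                       # assumed constant step size

for _ in range(2000):
    gx = x + P.T @ y              # grad_x F = grad G(x) + P^T y
    gy = P @ x - y                # grad_y F = P x - grad H(y)
    x = x - alpha * gx            # descent step in x
    y = y + alpha * gy            # ascent step in y

# For these quadratics the unique saddle point is (x*, y*) = (0, 0).
print(np.linalg.norm(x), np.linalg.norm(y))
```

With H strongly concave in the max variable and P full column rank, the iterates contract linearly toward the unique saddle point, mirroring (in the centralized setting) the linear rate the abstract claims for GT-GDA.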
Pages: 679-690
Page count: 12