A Flexible Stochastic Multi-Agent ADMM Method for Large-Scale Distributed Optimization

Times Cited: 1
Authors
Wu, Lin [1 ,2 ]
Wang, Yongbin [1 ,2 ]
Shi, Tuo [3 ,4 ]
Affiliations
[1] Minist Educ, Key Lab Convergent Media & Intelligent Technol, Beijing 100024, Peoples R China
[2] Commun Univ China, Sch Comp & Cyberspace Secur, Beijing 100024, Peoples R China
[3] Beijing Police Coll, Beijing 102202, Peoples R China
[4] Inst Sci & Tech Informat China, Beijing 100038, Peoples R China
Keywords
Distributed optimization; ADMM; variance reduction; Hessian approximation; flexibility; ALTERNATING DIRECTION METHOD; CONVERGENCE;
DOI
10.1109/ACCESS.2021.3120017
Chinese Library Classification: TP [automation technology; computer technology]
Discipline Code: 0812
Abstract
While stochastic alternating direction method of multipliers (ADMM) methods have shown enormous potential in distributed applications, improving their algorithmic flexibility can bring substantial benefits. In this paper, we propose a novel stochastic optimization method based on distributed ADMM, called Flex-SADMM. Specifically, we incorporate variance-reduced first-order information and approximated second-order information when solving the ADMM subproblem, which targets stable convergence and improves the accuracy of the search direction. Moreover, unlike most ADMM-based methods, which require every computation node to perform an update in every iteration, we only require each computation node to update within a bounded iteration interval, which significantly improves flexibility. We further provide theoretical results that guarantee the convergence of Flex-SADMM for nonconvex optimization problems. These results show that our proposed method successfully overcomes the above challenges while keeping the computational complexity low. In our empirical study, we have verified the effectiveness and the improved flexibility of the proposed method.
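The update scheme the abstract describes (consensus ADMM whose local subproblems are solved inexactly with variance-reduced stochastic gradient steps) can be sketched generically. The toy loop below is an illustrative implementation of consensus ADMM with SVRG-style gradients on synthetic least-squares data; it is not the authors' Flex-SADMM (their Hessian approximation and bounded-delay node updates are omitted), and all names and parameter values are hypothetical.

```python
import numpy as np

# Toy consensus ADMM: minimize sum_i f_i(x) with
# f_i(x) = (1/2n) * ||A_i x - b_i||^2 held by node i.
# The x_i-subproblem is solved inexactly with SVRG-style
# variance-reduced stochastic gradient steps (generic sketch,
# not the paper's Flex-SADMM).
rng = np.random.default_rng(0)
n_nodes, n_samples, dim = 4, 20, 3
A = [rng.standard_normal((n_samples, dim)) for _ in range(n_nodes)]
x_true = np.array([1.0, -2.0, 0.5])
b = [Ai @ x_true for Ai in A]          # consistent system: optimum is x_true

rho, inner_steps, step = 1.0, 10, 0.05
x = [np.zeros(dim) for _ in range(n_nodes)]   # local primal variables
y = [np.zeros(dim) for _ in range(n_nodes)]   # dual variables
z = np.zeros(dim)                             # consensus variable

def full_grad(i, v):
    """Mean gradient of f_i at v (the SVRG snapshot gradient)."""
    return A[i].T @ (A[i] @ v - b[i]) / n_samples

for _ in range(200):
    for i in range(n_nodes):
        snap, g_snap = x[i].copy(), full_grad(i, x[i])  # refresh snapshot
        for _ in range(inner_steps):
            j = rng.integers(n_samples)
            aj = A[i][j]
            # SVRG estimate: grad_j(x) - grad_j(snapshot) + mean gradient
            g = aj * (aj @ x[i] - b[i][j]) - aj * (aj @ snap - b[i][j]) + g_snap
            # gradient of the augmented-Lagrangian subproblem at node i
            g_aug = g + y[i] + rho * (x[i] - z)
            x[i] = x[i] - step * g_aug
    # consensus and dual updates
    z = np.mean([x[i] + y[i] / rho for i in range(n_nodes)], axis=0)
    for i in range(n_nodes):
        y[i] = y[i] + rho * (x[i] - z)

print("consensus error:", np.linalg.norm(z - x_true))
```

The variance-reduced estimator keeps the inner solver's gradient noise proportional to the distance from the snapshot, so the inexact subproblem solves become increasingly accurate as the iterates approach consensus; the paper's flexibility mechanism would additionally let a node skip some outer iterations, which is not modeled here.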
Pages: 19045-19059
Number of Pages: 15