On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error

Cited: 0
Authors
Hu, Jia [1 ]
Han, Congying [1 ,2 ]
Guo, Tiande [1 ,2 ]
Zhao, Tong [1 ,2 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Math Sci, 19A Yuquan Rd, Beijing 100049, Peoples R China
[2] Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, 80 Zhongguancun East Rd, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Nonsmooth nonconvex optimization; stochastic approximation; relative error; alternating direction method of multipliers; proximal gradient descent; variance reduction; ALTERNATING DIRECTION METHOD; PROXIMAL POINT ALGORITHM; CONVERGENCE; APPROXIMATION; EXTRAGRADIENT; MULTIPLIERS; COMPLEXITY; SELECTION;
DOI
10.1080/10556788.2022.2091562
CLC number (Chinese Library Classification)
TP31 [Computer Software];
Discipline code
081202; 0835;
Abstract
We consider minimizing a class of nonconvex composite optimization problems, both stochastic and deterministic, whose objective is the sum of an expectation function (or an average of finitely many smooth functions) and a weakly convex but potentially nonsmooth function. In this paper, we focus on the theoretical properties of two types of stochastic splitting methods for solving these nonconvex problems: the stochastic alternating direction method of multipliers (ADMM) and stochastic proximal gradient descent. In particular, we study several inexact versions of these two types of splitting methods. At each iteration, the proposed schemes solve their subproblems inexactly under relative error criteria rather than exogenous, diminishing error rules, which allows our approaches to handle complex regularized problems arising in statistics and machine learning. Under mild conditions, we establish convergence of the schemes, bound their computational complexity in terms of evaluations of component gradients of the smooth function, and show that several conclusions of their exact counterparts can be recovered.
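To make the relative-error subproblem rule concrete, below is a minimal Python sketch of one of the two method families described above, an inexact stochastic proximal gradient iteration, on an l1-regularized least-squares toy problem. Everything in it is an illustrative assumption rather than the paper's algorithm: the objective, the inner subgradient solver, the residual used in the stopping test, and all constants (`step`, `sigma`, the batch size) are chosen only to show how a relative error criterion can replace a preset, diminishing tolerance.

```python
import numpy as np

def inexact_stoch_prox_grad(A, b, lam, x0, step=0.05, sigma=0.5,
                            n_outer=500, max_inner=50, batch=8, seed=0):
    """Illustrative inexact stochastic proximal gradient method for
        min_x  (1/(2m)) * ||A x - b||^2 + lam * ||x||_1,
    where the l1 term stands in for the nonsmooth part of the model.

    The prox subproblem for the nonsmooth term is solved only
    approximately, by a short inner subgradient loop that stops once
    its optimality residual is small *relative* to the length of the
    outer step, rather than below a preset diminishing tolerance.
    """
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_outer):
        # Mini-batch stochastic gradient of the smooth part.
        idx = rng.choice(m, size=batch, replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch
        z = x - step * g  # forward (gradient) step
        # Inexact prox of step*lam*||.||_1 at z: subgradient steps on
        #   phi(y) = lam*||y||_1 + ||y - z||^2 / (2*step).
        y = z.copy()
        for _ in range(max_inner):
            y -= 0.5 * step * (lam * np.sign(y) + (y - z) / step)
            # Residual: distance from (z - y)/step to lam * d||y||_1.
            r = (z - y) / step
            res = np.linalg.norm(np.where(
                y != 0.0,
                r - lam * np.sign(y),
                np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)))
            # Relative error criterion: accept y once the residual is
            # dominated by the progress ||y - x|| of the outer step.
            if res <= sigma * np.linalg.norm(y - x) / step:
                break
        x = y
    return x

# Smoke test on a synthetic sparse regression instance.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.1 * rng.standard_normal(200)
x_hat = inexact_stoch_prox_grad(A, b, lam=0.1, x0=np.zeros(50))
print("largest entries:", np.flatnonzero(np.abs(x_hat) > 0.5))
```

A stochastic ADMM variant would gate its primal subproblems with the same kind of test; in both cases the point is that the accuracy demanded of the inner solver scales with the progress the outer iteration is actually making, rather than following an exogenous schedule.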
Pages: 1-33
Number of pages: 33