Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization

Cited by: 9
Authors
Zhang, Junyu [1 ]
Xiao, Lin [2 ]
Affiliations
[1] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
[2] Facebook AI Res FAIR, Seattle, WA 98109 USA
Keywords
Stochastic composite optimization; Nonsmooth optimization; Variance reduction; Prox-linear algorithm; Sample complexity; MINIMIZATION; NONSMOOTH;
DOI
10.1007/s10107-021-01709-z
CLC number
TP31 [Computer Software];
Subject classification codes
081202; 0835;
Abstract
We consider the problem of minimizing composite functions of the form f(g(x)) + h(x), where f and h are convex functions (which can be nonsmooth) and g is a smooth vector mapping. In addition, we assume that g is the average of a finite number of component mappings or the expectation over a family of random component mappings. We propose a class of stochastic variance-reduced prox-linear algorithms for solving such problems and bound their sample complexities for finding an epsilon-stationary point, in terms of the total number of evaluations of the component mappings and their Jacobians. When g is a finite average of N components, we obtain sample complexity O(N + N^{4/5} epsilon^{-1}) for both mapping and Jacobian evaluations. When g is a general expectation, we obtain sample complexities of O(epsilon^{-5/2}) and O(epsilon^{-3/2}) for component mappings and their Jacobians, respectively. If in addition f is smooth, then improved sample complexities of O(N + N^{1/2} epsilon^{-1}) and O(epsilon^{-3/2}) are derived for g being a finite average and a general expectation, respectively, for both component mapping and Jacobian evaluations.
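To make the prox-linear step concrete, the following is a minimal sketch (not the authors' code, and without the paper's stochastic variance reduction) of the iteration in the simplest smooth case: f(y) = 0.5*||y||^2, h = 0, and a hypothetical mapping g(x) = Ax + 0.1*sin(x). Each iteration linearizes g at the current point x_k and solves the strongly convex subproblem min_d f(g(x_k) + J(x_k) d) + (1/(2t))||d||^2, which for this quadratic f has a closed-form solution:

```python
import numpy as np

def g(x, A):
    """Smooth nonlinear component mapping (illustrative choice)."""
    return A @ x + 0.1 * np.sin(x)

def jacobian(x, A):
    """Jacobian of g at x."""
    return A + 0.1 * np.diag(np.cos(x))

def prox_linear_step(x, A, t):
    """One prox-linear step: minimize 0.5*||g(x) + J d||^2 + (1/(2t))*||d||^2 over d.

    Setting the gradient to zero gives J^T (g(x) + J d) + d/t = 0,
    i.e. the linear system (J^T J + I/t) d = -J^T g(x).
    """
    J, gx = jacobian(x, A), g(x, A)
    d = np.linalg.solve(J.T @ J + np.eye(x.size) / t, -J.T @ gx)
    return x + d

rng = np.random.default_rng(0)
A = np.eye(5) + 0.1 * rng.standard_normal((5, 5))  # well-conditioned demo matrix
x = rng.standard_normal(5)
obj0 = 0.5 * np.linalg.norm(g(x, A)) ** 2
for _ in range(100):
    x = prox_linear_step(x, A, t=100.0)
obj_final = 0.5 * np.linalg.norm(g(x, A)) ** 2
```

In the stochastic variance-reduced algorithms of the paper, g(x_k) and J(x_k) above would be replaced by variance-reduced estimates built from sampled component mappings and Jacobians; a nonsmooth f or nonzero h would require a convex subproblem solver rather than this closed form.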
Pages: 649-691
Number of pages: 43