Stochastic Primal-Dual Proximal ExtraGradient descent for compositely regularized optimization

Cited by: 6
Authors
Lin, Tianyi [1 ]
Qiao, Linbo [2 ]
Zhang, Teng [3 ]
Feng, Jiashi [4 ]
Zhang, Bofeng [2 ]
Affiliations
[1] Univ Calif Berkeley, Dept Ind Engn & Operat Res, Berkeley, CA USA
[2] Natl Univ Def Technol, Coll Comp, Changsha, Hunan, Peoples R China
[3] Stanford Univ, Dept Management Sci & Engn, Stanford, CA 94305 USA
[4] Natl Univ Singapore, Dept ECE, Singapore, Singapore
Keywords
Compositely regularized optimization; Stochastic Primal-Dual Proximal ExtraGradient descent; saddle-point; complexity; inequalities
DOI
10.1016/j.neucom.2017.07.066
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We consider a broad class of regularized stochastic minimization problems with two regularization terms, one of which is composed with a linear function. This optimization model abstracts a number of important applications in artificial intelligence and machine learning, such as fused Lasso, fused logistic regression, and a class of graph-guided regularized minimization problems. The computational challenges of this model are twofold. On the one hand, neither the proximal mapping associated with the composed regularization term nor that of the expected objective function admits a closed-form solution. On the other hand, computing the full gradient of the expectation in the objective is very expensive when the number of input data samples is large. To address these issues, we propose a stochastic variant of extra-gradient-type methods, namely Stochastic Primal-Dual Proximal ExtraGradient descent (SPDPEG), and analyze its convergence for both convex and strongly convex objectives. For general convex objectives, the uniformly averaged iterates generated by SPDPEG converge in expectation at an O(1/√t) rate. For strongly convex objectives, the uniformly and non-uniformly averaged iterates generated by SPDPEG converge at O(log(t)/t) and O(1/t) rates, respectively. These rates match the best known convergence rates for first-order stochastic algorithms. Experiments on fused logistic regression and graph-guided regularized logistic regression problems show that the proposed algorithm performs efficiently and consistently outperforms competing algorithms. (C) 2017 Elsevier B.V. All rights reserved.
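This record carries no pseudocode, but the model the abstract describes, min_x E[f(x; ξ)] + g(x) + h(Gx) with a linear map G, admits a standard saddle-point treatment in which the composed term h(Gx) is handled through a dual variable, so that every subproblem reduces to a closed-form proximal step. The Python sketch below is a minimal illustration of a generic stochastic primal-dual proximal extragradient iteration under assumptions, not the paper's exact SPDPEG update: it instantiates f as a logistic loss, g = lam1*||x||_1, and h = lam2*||.||_1 (a fused/graph-guided setting like the abstract's experiments); the names spdpeg_sketch, feats, G, tau, and sigma are illustrative, and the fixed step sizes stand in for whatever schedule the paper's O(1/√t) and O(1/t) analyses prescribe.

import numpy as np

def soft_threshold(v, t):
    # closed-form proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def box_project(y, r):
    # projection onto the l_inf ball of radius r: the proximal operator
    # of the convex conjugate of r * ||.||_1
    return np.clip(y, -r, r)

def logistic_grad(x, a, b):
    # gradient of log(1 + exp(-b * <a, x>)) for a single sample (a, b)
    return -b * a / (1.0 + np.exp(b * a.dot(x)))

def spdpeg_sketch(feats, labels, G, lam1, lam2, tau, sigma, iters, rng):
    # Generic stochastic primal-dual proximal extragradient loop for
    #   min_x (1/n) sum_i log(1 + exp(-b_i <a_i, x>)) + lam1*||x||_1 + lam2*||G x||_1,
    # with the composed term lam2*||G x||_1 handled via its dual variable y.
    n, d = feats.shape
    x = np.zeros(d)
    y = np.zeros(G.shape[0])
    x_avg = np.zeros(d)
    for k in range(1, iters + 1):
        i = rng.integers(n)                       # stochastic gradient sample
        g = logistic_grad(x, feats[i], labels[i])
        # predictor (extrapolation) step
        x_bar = soft_threshold(x - tau * (g + G.T @ y), tau * lam1)
        y_bar = box_project(y + sigma * (G @ x), lam2)
        # corrector step, with a fresh stochastic gradient at the predictor point
        j = rng.integers(n)
        g_bar = logistic_grad(x_bar, feats[j], labels[j])
        x = soft_threshold(x - tau * (g_bar + G.T @ y_bar), tau * lam1)
        y = box_project(y + sigma * (G @ x_bar), lam2)
        x_avg += (x - x_avg) / k                  # uniform iterate averaging
    return x_avg

Each iteration takes a predictor step from a stochastic gradient and a corrector step evaluated at the predictor point; because the dual variable absorbs the composed term, only soft-thresholding and a box projection are ever needed, both closed-form, which is exactly the difficulty the abstract identifies with proximal mappings of composed regularizers. The uniformly averaged iterate x_avg returned here corresponds to the iterate whose O(1/√t) expected rate the abstract states for general convex objectives.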
Pages: 516-525
Number of pages: 10