Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

Cited: 0
Authors
Zhang, Yuchen [1 ]
Xiao, Lin [2 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] Microsoft Res, Redmond, WA 98052 USA
Keywords
empirical risk minimization; randomized algorithms; convex-concave saddle point problems; primal-dual algorithms; computational complexity; ALTERNATING DIRECTION METHOD; GRADIENT-METHOD; OPTIMIZATION; APPROXIMATION; ACCELERATION; ALGORITHMS; ONLINE; ASCENT
DOI
not available
Chinese Library Classification
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variables. An extrapolation step on the primal variables is performed to obtain an accelerated convergence rate. We also develop a minibatch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.
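The iteration described in the abstract (dual coordinate ascent on a random index, a primal proximal step, then extrapolation on the primal variables) can be sketched for the special case of ridge regression, where both proximal steps have closed forms. The step sizes below follow the general SPDC recipe from the paper; the function name `spdc_ridge` and the least-squares instantiation are illustrative assumptions, not the authors' code:

```python
import numpy as np

def spdc_ridge(A, b, lam, iters=20000, seed=0):
    """SPDC sketch for ridge regression:
        min_x (1/2n) * sum_i (<a_i, x> - b_i)^2 + (lam/2) * ||x||^2,
    viewed as the saddle point problem
        min_x max_y (1/n) * sum_i [y_i <a_i, x> - y_i^2/2 - b_i * y_i]
              + (lam/2) * ||x||^2.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    R = np.linalg.norm(A, axis=1).max()              # max feature-vector norm
    sigma = 0.5 / R * np.sqrt(n * lam)               # dual step size
    tau = 0.5 / R * np.sqrt(1.0 / (n * lam))         # primal step size
    theta = 1.0 - 1.0 / (n + R * np.sqrt(n / lam))   # extrapolation weight
    x = np.zeros(d)
    xbar = np.zeros(d)
    y = np.zeros(n)
    u = np.zeros(d)                                  # maintains (1/n) sum_i y_i a_i
    for _ in range(iters):
        i = rng.integers(n)
        # maximize over the randomly chosen dual coordinate (closed-form prox)
        y_new = (sigma * (A[i] @ xbar - b[i]) + y[i]) / (1.0 + sigma)
        # minimize over the primal variables, using an over-relaxed gradient estimate
        ubar = u + (y_new - y[i]) * A[i]
        x_new = (x - tau * ubar) / (1.0 + lam * tau)
        # extrapolation step on the primal variables, then bookkeeping
        xbar = x_new + theta * (x_new - x)
        u += (y_new - y[i]) * A[i] / n
        y[i] = y_new
        x = x_new
    return x

# Usage: compare against the closed-form ridge solution on a toy problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
x_spdc = spdc_ridge(A, b, lam=0.1)
x_star = np.linalg.solve(A.T @ A / 50 + 0.1 * np.eye(5), A.T @ b / 50)
```

Because each least-squares conjugate is 1-strongly convex and the regularizer is `lam`-strongly convex, the step-size formulas simplify as above; on this toy problem the iterate matches the closed-form solution to high accuracy.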
Pages: 42
Related Papers
50 records in total
  • [1] Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
    Zhang, Yuchen
    Xiao, Lin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 37, 2015, 37 : 353 - 361
  • [2] Distributed Primal-Dual Proximal Method for Regularized Empirical Risk Minimization
    Khuzani, Masoud Badiei
    2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), 2018, : 938 - 945
  • [3] Doubly Greedy Primal-Dual Coordinate Descent for Sparse Empirical Risk Minimization
    Lei, Qi
    Yen, Ian E. H.
    Wu, Chao-yuan
    Dhillon, Inderjit S.
    Ravikumar, Pradeep
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [4] On Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization
    Qiao, Linbo
    Lin, Tianyi
    Jiang, Yu-Gang
    Yang, Fan
    Liu, Wei
    Lu, Xicheng
    ECAI 2016: 22ND EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, 285 : 167 - 174
  • [5] Stochastic Primal-Dual Method for Empirical Risk Minimization with O(1) Per-Iteration Complexity
    Tan, Conghui
    Zhang, Tong
    Ma, Shiqian
    Liu, Ji
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [6] Distributed Regularized Primal-Dual Method
    Badiei, Masoud
    Li, Na
    2016 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2016, : 540 - 544
  • [7] A primal-dual algorithm for risk minimization
    Kouri, Drew P.
    Surowiec, Thomas M.
    MATHEMATICAL PROGRAMMING, 2022, 193 (01) : 337 - 363
  • [8] A STOCHASTIC COORDINATE DESCENT PRIMAL-DUAL ALGORITHM AND APPLICATIONS
    Bianchi, Pascal
    Hachem, Walid
    Iutzeler, Franck
    2014 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2014,
  • [9] Adaptive coordinate sampling for stochastic primal-dual optimization
    Liu, Huikang
    Wang, Xiaolu
    So, Anthony Man-Cho
    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, 2022, 29 (01) : 24 - 47
  • [10] Block-Coordinate Primal-Dual Method for Nonsmooth Minimization over Linear Constraints
    Luke, D. Russell
    Malitsky, Yura
    LARGE-SCALE AND DISTRIBUTED OPTIMIZATION, 2018, 2227 : 121 - 147