Effective Proximal Methods for Non-convex Non-smooth Regularized Learning

Cited by: 0
Authors
Liang, Guannan [1 ]
Tong, Qianqian [1 ]
Ding, Jiahao [2 ]
Pan, Miao [2 ]
Bi, Jinbo [1 ]
Affiliations
[1] Univ Connecticut, Storrs, CT 06269 USA
[2] Univ Houston, Houston, TX 77004 USA
Funding
U.S. National Science Foundation;
Keywords
stochastic algorithms; proximal methods; arbitrary sampling; variable selection
DOI
10.1109/ICDM50108.2020.00043
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sparse learning is an important tool for mining useful information and patterns from high-dimensional data. Non-convex non-smooth regularized learning problems play an essential role in sparse learning and have recently drawn extensive attention. We design a family of stochastic proximal gradient methods that apply arbitrary sampling to solve the empirical risk minimization problem with a non-convex and non-smooth regularizer. These methods draw mini-batches of training examples according to an arbitrary probability distribution when computing stochastic gradients. We develop a unified analytic approach to examine the convergence and computational complexity of these methods, which allows us to compare the different sampling schemes. We show that the independent sampling scheme tends to outperform the commonly used uniform sampling scheme. Our new analysis also derives a tighter bound on the convergence speed of uniform sampling than the best bound available so far. Empirical evaluations demonstrate that the proposed algorithms converge faster than the state of the art.
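As an illustration of the class of methods the abstract describes, here is a minimal sketch, not the paper's exact algorithm: a proximal stochastic gradient iteration with independent sampling, in which example i enters the mini-batch with its own probability p[i] and the mini-batch gradient is reweighted by 1/p[i] to stay unbiased. The logistic loss, the minimax concave penalty (MCP) as the non-convex non-smooth regularizer, the sampling probabilities, and the step size are all illustrative assumptions.

```python
import numpy as np

def prox_mcp(v, eta, lam, gamma):
    """Closed-form proximal operator of the non-convex MCP penalty,
    prox_{eta * MCP}(v); the formula below is valid when gamma > eta."""
    out = v.copy()
    a = np.abs(v)
    out[a <= eta * lam] = 0.0                         # kill small entries
    mid = (a > eta * lam) & (a <= gamma * lam)        # shrink mid-range entries
    out[mid] = np.sign(v[mid]) * (a[mid] - eta * lam) / (1.0 - eta / gamma)
    return out                                        # large entries pass through

def prox_sgd_independent(X, y, p, eta=0.1, lam=0.01, gamma=5.0, iters=50, seed=0):
    """Proximal stochastic gradient with independent sampling (illustrative):
    example i joins the mini-batch with probability p[i]; dividing its
    gradient by p[i] keeps the mini-batch gradient an unbiased estimator."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        S = np.where(rng.random(n) < p)[0]            # independent sampling
        if S.size == 0:
            continue
        z = X[S] @ w
        resid = -y[S] / (1.0 + np.exp(y[S] * z))      # logistic-loss gradient terms
        g = (X[S] * (resid / p[S])[:, None]).sum(0) / n   # unbiased gradient estimate
        w = prox_mcp(w - eta * g, eta, lam, gamma)    # proximal step
    return w

# Illustrative run: probabilities proportional to row norms (importance sampling)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = np.sign(X[:, :3].sum(1) + 0.1 * rng.standard_normal(200))
norms = np.linalg.norm(X, axis=1)
p = np.clip(8 * norms / norms.sum(), 1e-3, 1.0)       # expected batch size of about 8
w = prox_sgd_independent(X, y, p)
print("nonzero coefficients:", np.count_nonzero(w))
```

Uniform sampling is recovered by setting every p[i] to the same constant b/n; the abstract's claim is that a well-chosen non-uniform distribution (here, crudely, proportional to per-example norms) can converge faster than that baseline.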
Pages: 342-351 (10 pages)