From inexact optimization to learning via gradient concentration

Cited by: 3
Authors
Stankewitz, Bernhard [1]
Muecke, Nicole [2]
Rosasco, Lorenzo [3,4,5]
Affiliations
[1] Humboldt Univ, Dept Math, Linden 6, D-10099 Berlin, Germany
[2] Tech Univ Carolo Wilhelmina Braunschweig, Inst Math Stochast, Univ Pl 2, D-38106 Braunschweig, Lower Saxony, Germany
[3] Univ Genoa, DIBRIS, MaLGa, Via Dodecaneso 35, I-16146 Genoa, Italy
[4] MIT, CBMM, Genoa, Italy
[5] Inst Italiano Tecnol, Genoa, Italy
Funding
European Research Council; EU Horizon 2020;
Keywords
Implicit regularization; Kernel methods; Statistical learning; CONVERGENCE; ALGORITHMS; REGRESSION;
DOI
10.1007/s10589-022-00408-5
Chinese Library Classification
C93 [Management Science]; O22 [Operations Research];
Discipline codes
070105; 12; 1201; 1202; 120202;
Abstract
Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (test error), for which the training data provides only partial information. In this view, the optimization problems that are practically feasible are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.
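The abstract's ingredients can be made concrete in a small, self-contained sketch. The code below is not taken from the paper; it is our own illustrative assumption, written in Python with NumPy: plain gradient descent on an unconstrained empirical least-squares objective, where the empirical gradient serves as the inexact, stochastic surrogate for the population gradient, and the stopping time of the iteration plays the role of the implicit regularizer. All variable names and constants are hypothetical.

# Illustrative sketch (not the authors' code): gradient descent on an
# unconstrained empirical least-squares objective, tracking the test error
# along the optimization path. The empirical gradient concentrates around
# the population gradient (roughly at rate 1/sqrt(n)), and early stopping
# acts as the implicit regularizer discussed in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem: n training points, a large "future" test set.
n, d = 100, 50
w_true = rng.normal(size=d) / np.sqrt(d)
X_train = rng.normal(size=(n, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n)
X_test = rng.normal(size=(10_000, d))
y_test = X_test @ w_true + 0.5 * rng.normal(size=10_000)

def test_error(w):
    # Mean squared error on held-out data, a proxy for the population risk.
    return np.mean((X_test @ w - y_test) ** 2)

w = np.zeros(d)
step = 0.1 * n / np.linalg.norm(X_train, 2) ** 2  # safe step for the averaged loss
errors = []
for t in range(500):
    # Empirical gradient of (1/2n) * ||X_train w - y_train||^2:
    # an inexact version of the gradient of the true (population) risk.
    grad = X_train.T @ (X_train @ w - y_train) / n
    w -= step * grad
    errors.append(test_error(w))

best_t = int(np.argmin(errors))
print(f"test error at best stopping time (t={best_t + 1}): {errors[best_t]:.3f}")
print(f"test error at the last iterate (t=500): {errors[-1]:.3f}")

Comparing the two printed values is meant to illustrate how the stopping time, rather than an explicit constraint or penalty, can govern the statistical behaviour of the unconstrained iteration.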
Pages: 265-294
Page count: 30
Related papers
50 records in total
  • [21] Decentralized Inexact Proximal Gradient Method With Network-Independent Stepsizes for Convex Composite Optimization
    Guo, Luyao
    Shi, Xinli
    Cao, Jinde
    Wang, Zihao
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2023, 71 : 786 - 801
  • [22] Variable Selection and Allocation in Joint Models via Gradient Boosting Techniques
    Griesbach, Colin
    Mayr, Andreas
    Bergherr, Elisabeth
    MATHEMATICS, 2023, 11 (02)
  • [23] An inexact proximal point method for quasiconvex multiobjective optimization
    Zhao, Xiaopeng
    Qi, Min
    Jolaoso, Lateef Olakunle
    Shehu, Yekini
    Yao, Jen-Chih
    Yao, Yonghong
    COMPUTATIONAL & APPLIED MATHEMATICS, 2024, 43 (05)
  • [24] INEXACT HALF-QUADRATIC OPTIMIZATION FOR IMAGE RECONSTRUCTION
    Robini, Marc
    Zhu, Yuemin
    Lv, Xudong
    Liu, Wanyu
    2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 3513 - 3517
  • [25] DISTRIBUTED OPTIMIZATION WITH INEXACT ORACLE
    Zhu, Kui
    Zhang, Yichen
    Tang, Yutao
    KYBERNETIKA, 2022, 58 (04) : 578 - 592
  • [26] Convergence properties of inexact projected gradient methods
    Wang, Changyu
    Liu, Qian
    OPTIMIZATION, 2006, 55 (03) : 301 - 310
  • [27] Learning Adaptive Differential Evolution Algorithm From Optimization Experiences by Policy Gradient
    Sun, Jianyong
    Liu, Xin
    Bäck, Thomas
    Xu, Zongben
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2021, 25 (04) : 666 - 680
  • [28] Inexact Half-Quadratic Optimization for Linear Inverse Problems
    Robini, Marc C.
    Yang, Feng
    Zhu, Yuemin
    SIAM JOURNAL ON IMAGING SCIENCES, 2018, 11 (02): : 1078 - 1133
  • [29] A gradient-related algorithm with inexact line searches
    Shi, ZJ
    Shen, J
    JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS, 2004, 170 (02) : 349 - 370
  • [30] New proximal bundle algorithm based on the gradient sampling method for nonsmooth nonconvex optimization with exact and inexact information
    Monjezi, N. Hoseini
    Nobakhtian, S.
    NUMERICAL ALGORITHMS, 2023, 94 (02) : 765 - 787