From inexact optimization to learning via gradient concentration

Cited: 3
Authors:
Stankewitz, Bernhard [1 ]
Muecke, Nicole [2 ]
Rosasco, Lorenzo [3 ,4 ,5 ]
Affiliations:
[1] Humboldt Univ, Dept Math, Linden 6, D-10099 Berlin, Germany
[2] Tech Univ Carolo Wilhelmina Braunschweig, Inst Math Stochast, Univ Pl 2, D-38106 Braunschweig, Lower Saxony, Germany
[3] Univ Genoa, DIBRIS, MaLGa, Via Dodecaneso 35, I-16146 Genoa, Italy
[4] MIT, CBMM, Genoa, Italy
[5] Inst Italiano Tecnol, Genoa, Italy
Funding:
European Research Council; EU Horizon 2020
Keywords:
Implicit regularization; Kernel methods; Statistical learning; Convergence; Algorithms; Regression
DOI:
10.1007/s10589-022-00408-5
CLC Classification:
C93 [Management Science]; O22 [Operations Research];
Discipline Codes:
070105; 12; 1201; 1202; 120202;
Abstract:
Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (test error), for which the training data provides only partial information. In this view, the optimization problems that are practically feasible are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.
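The implicit regularization described in the abstract can be illustrated with a minimal sketch (not the authors' code): running plain gradient descent on an unconstrained least-squares objective and tracking the test error along the optimization path. All problem sizes, the noise level, and the step-size choice below are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: an overparameterized linear regression problem
# (more features than samples) with noisy training labels.
rng = np.random.default_rng(0)
n, d = 50, 200
w_star = rng.normal(size=d) / np.sqrt(d)          # ground-truth weights
X_train = rng.normal(size=(n, d))
y_train = X_train @ w_star + 0.5 * rng.normal(size=n)  # noisy labels
X_test = rng.normal(size=(1000, d))
y_test = X_test @ w_star                          # noiseless test targets

# Gradient descent on the empirical least-squares objective, with step
# size 1/L where L is the spectral norm of X^T X (a standard choice).
w = np.zeros(d)
step = 1.0 / np.linalg.norm(X_train, 2) ** 2
test_errors = []
for t in range(500):
    grad = X_train.T @ (X_train @ w - y_train)    # empirical gradient
    w -= step * grad
    test_errors.append(np.mean((X_test @ w - y_test) ** 2))

# The iteration count acts as a regularization parameter: stopping at the
# iterate with the lowest test error typically does at least as well as
# running the unconstrained optimization to (near) convergence.
best_t = int(np.argmin(test_errors))
print(best_t, test_errors[best_t], test_errors[-1])
```

Here the training objective is minimized exactly in the limit, yet the test error along the path is what matters for learning; on noisy data the early iterates can generalize better than the final ones, which is the implicit regularization effect the paper analyzes.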
Pages: 265-294 (30 pages)
Related Papers (50 total; items [41]-[50] shown):
  • [41] Inexact Gradient Projection and Fast Data Driven Compressed Sensing
    Golbabaee, Mohammad
    Davies, Mike E.
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2018, 64 (10) : 6707 - 6721
  • [42] ON THE COMPLEXITY OF AN INEXACT RESTORATION METHOD FOR CONSTRAINED OPTIMIZATION
    Bueno, Luis Felipe
    Martinez, Jose Mario
    SIAM JOURNAL ON OPTIMIZATION, 2020, 30 (01) : 80 - 101
  • [43] Descentwise inexact proximal algorithms for smooth optimization
    Fuentes, Marc
    Malick, Jerome
    Lemarechal, Claude
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2012, 53 (03) : 755 - 769
  • [44] A Hybrid and Inexact Algorithm for Nonconvex and Nonsmooth Optimization
    Wang, Yiyang
    Song, Xiaoliang
    JOURNAL OF SYSTEMS SCIENCE & COMPLEXITY, 2024,
  • [45] An inexact modified subgradient algorithm for nonconvex optimization
    Burachik, Regina S.
    Kaya, C. Yalcin
    Mammadov, Musa
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2010, 45 (01) : 1 - 24
  • [46] Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
    Lee, Ching-pei
    MATHEMATICAL PROGRAMMING, 2023, 201 (1-2) : 599 - 633
  • [47] Stability and optimization error of stochastic gradient descent for pairwise learning
    Shen, Wei
    Yang, Zhenhuan
    Ying, Yiming
    Yuan, Xiaoming
    ANALYSIS AND APPLICATIONS, 2020, 18 (05) : 887 - 927
  • [48] Evolutionary Dynamic Multiobjective Optimization via Learning From Historical Search Process
    Zhao, Qi
    Yan, Bai
    Shi, Yuhui
    Middendorf, Martin
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (07) : 6119 - 6130
  • [49] An Augmented Lagrangian Trust-Region Method With Inexact Gradient Evaluations to Accelerate Constrained Optimization Problems Using Model Hyperreduction
    Wen, Tianshu
    Zahr, Matthew J.
    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, 2024, : 621 - 645
  • [50] Distributed consensus-based multi-agent convex optimization via gradient tracking technique
    Li, Huaqing
    Zhang, Hao
    Wang, Zheng
    Zhu, Yifan
    Han, Qi
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2019, 356 (06): : 3733 - 3761