From inexact optimization to learning via gradient concentration

Cited by: 3
Authors
Stankewitz, Bernhard [1 ]
Muecke, Nicole [2 ]
Rosasco, Lorenzo [3 ,4 ,5 ]
Affiliations
[1] Humboldt Univ, Dept Math, Linden 6, D-10099 Berlin, Germany
[2] Tech Univ Carolo Wilhelmina Braunschweig, Inst Math Stochast, Univ Pl 2, D-38106 Braunschweig, Lower Saxony, Germany
[3] Univ Genoa, DIBRIS, MaLGa, Via Dodecaneso 35, I-16146 Genoa, Italy
[4] MIT, CBMM, Genoa, Italy
[5] Inst Italiano Tecnol, Genoa, Italy
Funding
European Research Council; European Union Horizon 2020;
Keywords
Implicit regularization; Kernel methods; Statistical learning; CONVERGENCE; ALGORITHMS; REGRESSION;
DOI
10.1007/s10589-022-00408-5
CLC classification
C93 [Management Science]; O22 [Operations Research];
Discipline codes
070105; 12; 1201; 1202; 120202;
Abstract
Optimization in machine learning typically deals with the minimization of empirical objectives defined by training data. The ultimate goal of learning, however, is to minimize the error on future data (test error), for which the training data provides only partial information. In this view, the optimization problems that are practically feasible are based on inexact quantities that are stochastic in nature. In this paper, we show how probabilistic results, specifically gradient concentration, can be combined with results from inexact optimization to derive sharp test error guarantees. By considering unconstrained objectives, we highlight the implicit regularization properties of optimization for learning.
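The central idea of the abstract — running plain gradient descent on an unconstrained empirical objective, where the number of iterations itself acts as an implicit regularizer controlling the test error ("early stopping") — can be sketched as follows. This is an illustrative toy example in ordinary least squares, not the paper's kernel setting; all variable names, dimensions, and constants are our own choices:

```python
import numpy as np

# Toy illustration of implicit regularization via early stopping:
# gradient descent on an *empirical* (hence inexact, stochastic) least-squares
# objective, monitoring the error on fresh "future" data along the path.
rng = np.random.default_rng(0)

n, d = 200, 50
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n)   # noisy training labels

X_test = rng.standard_normal((1000, d))
y_test = X_test @ w_true                        # noiseless test targets

def gd_path(X, y, step, n_iters):
    """Plain gradient descent on the empirical squared loss;
    return the iterate after every step (the optimization path)."""
    w = np.zeros(X.shape[1])
    path = []
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / len(y)       # empirical (inexact) gradient
        w = w - step * grad
        path.append(w.copy())
    return path

path = gd_path(X, y, step=0.1, n_iters=500)
test_errors = [np.mean((X_test @ w - y_test) ** 2) for w in path]
best_t = int(np.argmin(test_errors))            # early stopping time
```

Along the path, the test error first drops sharply and then flattens (or rises) as the iterates overfit the noise in `y`; stopping at `best_t` plays the role of choosing a regularization parameter, with no explicit constraint or penalty in the objective.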
Pages: 265-294 (30 pages)
Related papers (50 total)
  • [1] From inexact optimization to learning via gradient concentration
    Bernhard Stankewitz
    Nicole Mücke
    Lorenzo Rosasco
    Computational Optimization and Applications, 2023, 84 : 265 - 294
  • [2] Time Varying Optimization via Inexact Proximal Online Gradient Descent
    Dixit, Rishabh
    Bedi, Amrit Singh
    Tripathi, Ruchi
    Rajawat, Ketan
    2018 CONFERENCE RECORD OF 52ND ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2018, : 759 - 763
  • [3] AN INEXACT NONMONOTONE PROJECTED GRADIENT METHOD FOR CONSTRAINED MULTIOBJECTIVE OPTIMIZATION
    Zhao, Xiaopeng
    Zhang, Huijie
    Yao, Yonghong
    JOURNAL OF NONLINEAR AND VARIATIONAL ANALYSIS, 2024, 8 (04): : 517 - 531
  • [4] Inexact Reduced Gradient Methods in Nonconvex Optimization
    Khanh, Pham Duy
    Mordukhovich, Boris S.
    Tran, Dat Ba
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2024, 203 (03) : 2138 - 2178
  • [5] Stochastic Optimization for Nonconvex Problem With Inexact Hessian Matrix, Gradient, and Function
    Liu, Liu
    Liu, Xuanqing
    Hsieh, Cho-Jui
    Tao, Dacheng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (01) : 1651 - 1663
  • [6] An Inexact Fenchel Dual Gradient Algorithm for Distributed Optimization
    Wang, He
    Lu, Jie
    2020 IEEE 16TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION (ICCA), 2020, : 949 - 954
  • [7] Federated Learning Via Inexact ADMM
    Zhou, Shenglong
    Li, Geoffrey Ye
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08) : 9699 - 9708
  • [8] Distributed and Inexact Proximal Gradient Method for Online Convex Optimization
    Bastianello, Nicola
    Dall'Anese, Emiliano
    2021 EUROPEAN CONTROL CONFERENCE (ECC), 2021, : 2432 - 2437
  • [9] Multi-Agent Distributed Optimization via Inexact Consensus ADMM
    Chang, Tsung-Hui
    Hong, Mingyi
    Wang, Xiangfeng
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2015, 63 (02) : 482 - 497
  • [10] An inexact Riemannian proximal gradient method
    Huang, Wen
    Wei, Ke
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2023, 85 (01) : 1 - 32