Applying statistical learning theory to deep learning

Times Cited: 0
Authors
Gerbelot, Cedric [1 ]
Karagulyan, Avetik [2 ]
Karp, Stefani [3 ,4 ]
Ravichandran, Kavya [5 ]
Stern, Menachem [6 ]
Srebro, Nathan [5 ]
Affiliations
[1] Courant Inst Math Sci, New York, NY 10012 USA
[2] King Abdullah Univ Sci & Technol, Thuwal 23955, Saudi Arabia
[3] Carnegie Mellon Univ, Pittsburgh, PA USA
[4] Google Res, New York, NY USA
[5] Toyota Technol Inst, Chicago, IL 60637 USA
[6] Univ Penn, Dept Phys & Astron, Philadelphia, PA USA
Source
JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT | 2024, Vol. 2024, Issue 10
Keywords
machine learning; learning theory; deep learning; analysis of algorithms; first-order methods; bounds
DOI
10.1088/1742-5468/ad3a5f
Chinese Library Classification (CLC)
O3 [Mechanics];
Discipline Classification Code
08; 0801;
Abstract
Although statistical learning theory provides a robust framework for understanding supervised learning, many theoretical aspects of deep learning remain unclear; in particular, how different architectures, when trained with gradient-based methods, give rise to inductive bias. The goal of these lectures is to provide an overview of some of the main questions that arise when attempting to understand deep learning from a learning-theory perspective. After a brief review of statistical learning theory and stochastic optimization, we discuss implicit bias in the context of benign overfitting. We then give a general description of the mirror descent algorithm, showing how one can move back and forth between the parameter space and the corresponding function space of a given learning problem, and how the geometry of the learning problem may be represented by a metric tensor. Building on this framework, we provide a detailed study of the implicit bias of gradient descent on diagonal linear networks for various regression tasks, showing how the loss function, the scale of the parameters at initialization, and the depth of the network can lead to different forms of implicit bias; in particular, to a transition between the kernel and feature-learning regimes.
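A minimal sketch of the last point above, not taken from the lectures themselves: full-batch gradient descent on a depth-2 diagonal linear network with effective weights w = u**2 - v**2, run on a synthetic sparse regression problem. The initialization scale alpha, learning rate, problem sizes, and helper name train_diagonal_net are all illustrative assumptions; the sketch only shows how shrinking alpha tends to move the interpolating solution from a kernel-like regime toward a sparsity-seeking, feature-learning regime.

import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 100                               # fewer samples than features (underdetermined)
X = rng.standard_normal((n, d)) / np.sqrt(n)
w_star = np.zeros(d)
w_star[:3] = 1.0                             # sparse ground-truth regressor
y = X @ w_star

def train_diagonal_net(alpha, lr=0.01, steps=50_000):
    """Full-batch gradient descent on 0.5 * ||X @ (u**2 - v**2) - y||**2."""
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(steps):
        w = u**2 - v**2
        g = X.T @ (X @ w - y)                # gradient with respect to the effective weights w
        u -= lr * 2.0 * u * g                # chain rule through  u**2
        v += lr * 2.0 * v * g                # chain rule through -v**2
    return u**2 - v**2

for alpha in (1.0, 1e-3):                    # large vs small initialization scale
    w_hat = train_diagonal_net(alpha)
    print(f"alpha={alpha:g}  l1(w_hat)={np.abs(w_hat).sum():.3f}  "
          f"dist to w_star={np.linalg.norm(w_hat - w_star):.3f}  "
          f"train residual={np.linalg.norm(X @ w_hat - y):.1e}")

With the small initialization, the recovered weights typically have a much smaller l1 norm and lie closer to the sparse ground truth, while both runs fit the training data to near-zero error.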
Pages: 64