Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems trained with Gradient Descent

Cited by: 0
Authors
Buskulic, Nathan [1]
Queau, Yvain [1]
Fadili, Jalal [1]
Affiliations
[1] Normandie Univ, CNRS, UNICAEN, ENSICAEN, GREYC, Caen, France
Source
32ND EUROPEAN SIGNAL PROCESSING CONFERENCE, EUSIPCO 2024 | 2024
Keywords
Inverse problems; Deep Image/Inverse Prior; Overparametrization; Gradient descent; Unsupervised learning
DOI
10.23919/EUSIPCO63174.2024.10715276
Chinese Library Classification (CLC)
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Advanced machine learning methods, most prominently neural networks, have become standard tools for solving inverse problems in recent years. However, theoretical recovery guarantees for such methods remain scarce and difficult to achieve. Only recently were unsupervised methods such as the Deep Image Prior (DIP) equipped with convergence and recovery guarantees for generic loss functions when trained through gradient flow with an appropriate initialization. In this paper, we extend these results by proving that the guarantees hold true when using gradient descent with an appropriately chosen step-size/learning rate. We also show that, for a two-layer DIP network, the discretization changes the overparametrization bound only by a constant factor, so the guarantees established for gradient flow carry over to gradient descent.
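The setting the abstract describes can be illustrated with a minimal numpy sketch: a two-layer DIP-style network with a fixed random input, trained by unsupervised gradient descent (the explicit Euler discretization of the gradient flow) on a least-squares measurement loss. This is only an illustration under assumptions, not the paper's exact construction: the dimensions, the `tanh` activation, the Rademacher outer layer `V`, and the choice to train only the inner layer `W` are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n measurements, m signal entries,
# k hidden neurons (overparametrized: k >> n), d latent input size.
n, m, k, d = 10, 20, 200, 5

A = rng.standard_normal((n, m)) / np.sqrt(n)    # linear forward operator
x_true = rng.standard_normal(m)
y = A @ x_true                                  # noiseless measurements

# Two-layer DIP-style network u(W) = V @ phi(W z) / sqrt(k):
# the random input z and the outer layer V stay fixed; only W is trained.
z = rng.standard_normal(d)
z /= np.linalg.norm(z)
V = rng.choice([-1.0, 1.0], size=(m, k))        # fixed Rademacher outer layer
W = rng.standard_normal((k, d))

phi = np.tanh
dphi = lambda h: 1.0 - np.tanh(h) ** 2

def network(W):
    return V @ phi(W @ z) / np.sqrt(k)

# Gradient descent on L(W) = 0.5 * ||A u(W) - y||^2 with a fixed step size,
# i.e. the discretization of the gradient flow studied in the paper.
step = 0.1
losses = []
for _ in range(3000):
    h = W @ z
    r = A @ network(W) - y                      # residual in measurement space
    losses.append(0.5 * float(r @ r))
    # Chain rule: dL/dW = ((V^T A^T r) * phi'(h)) z^T / sqrt(k)
    grad = np.outer((V.T @ (A.T @ r)) * dphi(h), z) / np.sqrt(k)
    W -= step * grad

print(f"loss: {losses[0]:.2e} -> {losses[-1]:.2e}")
```

In this overparametrized regime (k much larger than n), gradient descent with a small enough fixed step drives the measurement loss toward zero, mirroring the convergence behavior the paper establishes for the continuous-time flow.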
Pages: 1806-1810
Page count: 5
References
15 in total
[1] Arora S, 2019, PR MACH LEARN RES, V97.
[2] Arridge, Simon; Maass, Peter; Oktem, Ozan; Schonlieb, Carola-Bibiane. Solving inverse problems using data-driven models. ACTA NUMERICA, 2019, 28: 1-174.
[3] Bartlett, Peter L.; Montanari, Andrea; Rakhlin, Alexander. Deep learning: a statistical viewpoint. ACTA NUMERICA, 2021, 30: 87-201.
[4] Buskulic N, 2024, arXiv:2309.12128; DOI: 10.1007/s10851-024-01191-0.
[5] Chizat L, 2019, ADV NEUR IN, V32.
[6] Du S. S., 2019, ICLR.
[7] Fang, Cong; Dong, Hanze; Zhang, Tong. Mathematical Models of Overparameterized Neural Networks. PROCEEDINGS OF THE IEEE, 2021, 109(5): 683-703.
[8] Kurdyka, K. On gradients of functions definable in o-minimal structures. ANNALES DE L'INSTITUT FOURIER, 1998, 48(3): 769-+.
[9] Liu, Chaoyue; Zhu, Libin; Belkin, Mikhail. Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2022, 59: 85-116.
[10] Lojasiewicz S., 1963, EQUATIONS DERIVEES P, V117, P87, DOI: 10.1006/JDEQ.1997.3393.