Solving Inverse Problems via Auto-Encoders

Cited by: 21
Authors
Peng, Pei [1 ]
Jalali, Shirin [2 ]
Yuan, Xin [2 ]
Affiliations
[1] Rutgers State Univ, Dept Elect & Comp Engn, Piscataway, NJ 08854 USA
[2] Nokia Bell Labs, Murray Hill, NJ 07974 USA
Source
IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY | 2020, Vol. 1, No. 1
Keywords
Compressed sensing; generative models; inverse problems; auto-encoders; deep learning; networks
DOI
10.1109/JSAIT.2020.2983643
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Compressed sensing (CS) is about recovering a structured signal from its under-determined linear measurements. Starting from sparsity, recovery methods have steadily moved towards more complex structures. Emerging machine learning tools such as generative functions that are based on neural networks are able to learn general complex structures from training data. This makes them potentially powerful tools for designing CS algorithms. Consider a desired class of signals $\mathcal{Q}$, $\mathcal{Q} \subset \mathbb{R}^n$, and a corresponding generative function $g : \mathcal{U}^k \to \mathbb{R}^n$, $\mathcal{U} \subset \mathbb{R}$, such that $\sup_{x \in \mathcal{Q}} \min_{u \in \mathcal{U}^k} \frac{1}{\sqrt{n}} \| g(u) - x \| \leq \delta$. A recovery method based on $g$ seeks $g(u)$ with minimum measurement error. In this paper, the performance of such a recovery method is studied, under both noisy and noiseless measurements. In the noiseless case, roughly speaking, it is proven that, as $k$ and $n$ grow without bound and $\delta$ converges to zero, if the number of measurements ($m$) is larger than the input dimension of the generative model ($k$), then asymptotically almost lossless recovery is possible. Furthermore, the performance of an efficient iterative algorithm based on projected gradient descent is studied. In this case, an auto-encoder is used to define and enforce the source structure at the projection step. The auto-encoder is defined by encoder and decoder (generative) functions $f : \mathbb{R}^n \to \mathcal{U}^k$ and $g : \mathcal{U}^k \to \mathbb{R}^n$, respectively. We theoretically prove that, roughly, given $m > 40 k \log \frac{1}{\delta}$ measurements, such an algorithm converges to the vicinity of the desired result, even in the presence of additive white Gaussian noise. Numerical results exploring the effectiveness of the proposed method are presented.
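The projected-gradient-descent recovery described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation: it is a minimal loop in which a generic encoder f and decoder g enforce the source structure at the projection step, and the toy auto-encoder (projection onto the first k coordinates) and the name pgd_autoencoder_recovery are hypothetical placeholders used only to make the example self-contained and runnable.

```python
import numpy as np

def pgd_autoencoder_recovery(y, A, f, g, mu=1.0, num_iters=100):
    """Recover x from measurements y = A x (+ noise) via projected gradient descent,
    using an auto-encoder (encoder f, decoder g) to enforce the signal structure."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_iters):
        # Gradient step on the measurement error 0.5 * ||y - A x||^2
        x_grad = x + mu * A.T @ (y - A @ x)
        # Projection step: map the iterate onto the range of the decoder g
        x = g(f(x_grad))
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, m = 64, 8, 32
    # Hypothetical toy auto-encoder: the "structure" is signals supported on the
    # first k coordinates, so f keeps those entries and g zero-pads back to length n.
    f = lambda x: x[:k]
    g = lambda u: np.concatenate([u, np.zeros(n - k)])
    x_true = g(rng.standard_normal(k))
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
    y = A @ x_true                                 # noiseless measurements
    x_hat = pgd_autoencoder_recovery(y, A, f, g, mu=0.5, num_iters=200)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this toy setting the projection step is exact, so the loop reduces to gradient descent restricted to a k-dimensional subspace; with a learned auto-encoder, f and g would instead be neural networks and the projection would only be approximate.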
Pages: 312-323
Page count: 12