Linear View Synthesis Using a Dimensionality Gap Light Field Prior

Cited by: 130
Authors
Levin, Anat [1 ]
Durand, Fredo [2 ]
Affiliations
[1] Weizmann Inst Sci, IL-76100 Rehovot, Israel
[2] MIT, CSAIL, Cambridge, MA 02139 USA
Source
2010 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2010
Keywords
DOI
10.1109/CVPR.2010.5539854
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Acquiring and representing the 4D space of rays in the world (the light field) is important for many computer vision and graphics applications. Yet light field acquisition is costly due to its high dimensionality. Existing approaches either capture the 4D space explicitly or involve an error-sensitive depth estimation process. This paper argues that the fundamental difference between acquisition and rendering techniques is a difference in the prior assumptions they place on the light field. We use the previously reported dimensionality gap in the 4D light field spectrum to propose a new light field prior. The new prior is a Gaussian that assigns non-zero variance mostly to a 3D subset of entries. Since only a low-dimensional subset of entries has non-zero variance, we can reduce the complexity of the acquisition process and render the 4D light field from 3D measurement sets. Moreover, the Gaussian nature of the prior leads to linear and depth-invariant reconstruction algorithms. We use the new prior to render the 4D light field from a 3D focal stack sequence and to interpolate sparse directional samples and aliased spatial measurements. In all cases the algorithm reduces to a simple spatially invariant deconvolution that does not involve depth estimation.
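The abstract's key computational claim is that, under a Gaussian prior with Gaussian noise, reconstruction is linear and reduces to a spatially invariant deconvolution, i.e. a single frequency-domain (Wiener-style) filter with no per-pixel depth estimation. The sketch below illustrates that reduction in NumPy; it is a generic Wiener deconvolution under assumed circular boundary conditions, not the paper's actual focal-stack pipeline, and the kernel and noise-to-signal ratio are placeholders.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_to_signal=1e-2):
    """Spatially invariant deconvolution in the frequency domain.

    With a Gaussian image prior and Gaussian noise, the MAP estimate is
    linear in the measurements and collapses to this per-frequency filter.
    The noise_to_signal term stands in for the prior's variance ratio
    (an illustrative placeholder, not a value from the paper).
    """
    # Zero-pad the kernel to the image size; fft2 handles the padding.
    H = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + NSR), applied identically at
    # every spatial location -- hence "spatially invariant".
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * B))
```

Because the filter is the same everywhere in the image, the cost is a few FFTs regardless of scene depth structure, which is what makes the reconstruction depth-invariant.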
Pages: 1831-1838
Page count: 8