Light-Field Raw Data Synthesis From RGB-D Images: Pushing to the Extreme

Cited: 3
Authors
Wu, Yiqun [1 ,2 ]
Liu, Shuaicheng [1 ]
Sun, Chao [1 ]
Zeng, Bing [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[2] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
Keywords
Spatial resolution; Cameras; Lenses; Rendering (computer graphics); Image reconstruction; Arrays; Light-field; RGB-D images; micro lens; refocusing; sub-aperture
DOI
10.1109/ACCESS.2020.2974063
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Light-field raw data captured by a state-of-the-art light-field camera is limited in its spatial and angular resolutions by the camera's optical hardware. In this paper, we propose an all-software algorithm to synthesize light-field raw data from a single RGB-D input image, driven largely by the needs of research on light-field data compression. Our synthesis algorithm consists of three key steps: (1) each pixel of the input image is regarded as a point light source that emits directional light rays of equal strength; (2) the optical path of each directional light ray through the camera's main lens, as well as the corresponding micro lens, is modeled as accurately as possible; and (3) the occlusion of light rays among objects at different distances within the input image is handled using the depth information. The spatial and angular resolutions of our synthesized light-field data scale up as the spatial resolution of the input RGB-D image increases. Meanwhile, for a given input image of fixed size, we pay special attention to how far we can push the parameters involved in our synthesis algorithm, such as the number of rays emitted from each pixel, the number of micro lenses, and the number of sensors associated with each micro lens. The usefulness of our synthesized data is validated by refocusing, all-in-focus, and sub-aperture reconstructions. In particular, all-in-focus images are evaluated objectively by computing the structural similarity (SSIM) index, which allows us to reach the goal of pushing to the extreme through selecting the various parameters mentioned above.
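The three steps described in the abstract can be illustrated with a toy 1-D ray-tracing sketch. Everything below (the function name, the lens geometry, and all numeric parameters) is an illustrative assumption, not the authors' implementation: pixels act as point sources emitting rays of equal strength across the main-lens aperture, each ray is refracted by a thin lens and propagated to a micro-lens plane, and occlusion is resolved with a per-sensor z-buffer keyed on depth.

```python
import numpy as np

def synthesize_lf_raw_1d(colors, depths, positions,
                         n_rays=32, n_micro=16, sensors_per_micro=8,
                         f=0.050, aperture=0.025, micro_plane=0.055,
                         array_halfwidth=0.010):
    """Toy 1-D sketch (hypothetical geometry, not the paper's method):
      (1) each pixel emits n_rays equal-strength rays across the aperture;
      (2) each ray is refracted by a thin main lens (1/v = 1/f - 1/u) and
          propagated to the micro-lens plane; the micro-lens index gives
          the spatial sample, the aperture coordinate the angular
          (sub-aperture) sample;
      (3) occlusion uses a per-sensor z-buffer: the nearest point wins.
    Returns an (n_micro, sensors_per_micro, 3) raw array."""
    raw = np.zeros((n_micro, sensors_per_micro, 3))
    zbuf = np.full((n_micro, sensors_per_micro), np.inf)
    a_samples = np.linspace(-aperture / 2, aperture / 2, n_rays)
    for c, u, y in zip(colors, depths, positions):
        v = 1.0 / (1.0 / f - 1.0 / u)        # thin-lens image distance
        y_img = -y * v / u                   # image height (lateral magnification)
        for a in a_samples:
            # ray leaves the lens at height a, headed for the image point
            # (v, y_img); evaluate its height at the micro-lens plane
            x = a + (y_img - a) * micro_plane / v
            m = int(np.floor((x / array_halfwidth + 1) / 2 * n_micro))
            s = int(np.floor((a / (aperture / 2) + 1) / 2 * sensors_per_micro))
            if 0 <= m < n_micro and 0 <= s < sensors_per_micro and u < zbuf[m, s]:
                raw[m, s] = np.asarray(c)    # nearer surface occludes farther
                zbuf[m, s] = u
    return raw
```

Because the z-buffer always keeps the nearest surface, the result does not depend on the order in which pixels are processed, which is the property step (3) relies on.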
Pages: 33391-33405
Page count: 15