Learning to simultaneously enhance field of view and dynamic range for light field imaging

Cited by: 9
Authors
Chen, Yeyao [1 ]
Jiang, Gangyi [1 ]
Yu, Mei [1 ]
Xu, Haiyong [1 ]
Ho, Yo-Sung [2 ]
Affiliations
[1] Ningbo Univ, Fac Informat Sci & Engn, Ningbo 315211, Peoples R China
[2] Gwangju Inst Sci & Technol, Sch Elect Engn & Comp Sci, Gwangju 61005, South Korea
Keywords
Light field; Wide field of view; High dynamic range; Image fusion; Unsupervised learning; DEEP HOMOGRAPHY;
DOI
10.1016/j.inffus.2022.10.021
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Light field (LF) imaging, which simultaneously captures the intensity and direction of light rays and thereby enables many vision applications, has received widespread attention. However, limited by the optical structure of the LF camera, the acquired LF images usually suffer from a narrow field of view (FOV) and a low dynamic range. To address these problems, this paper proposes an unsupervised wide-FOV high dynamic range (HDR) LF imaging method, which can effectively reconstruct a wide-FOV HDR LF image from a set of source LF images captured from different perspectives and simultaneously with different exposures. Specifically, the proposed method first exploits tensor decomposition to obtain a compact representation of the high-dimensional LF image, so as to enable a computationally efficient 2D neural network for LF registration. Subsequently, an exposure restoration network is constructed to recover the multi-exposure information of the registered non-overlapping regions, which is then linearly fused with the previous registration results to generate the stitched wide-FOV multi-exposure LF images. Finally, an HDR LF blending network with two ingenious unsupervised losses is designed to blend the stitching results into the desired wide-FOV HDR LF image. Experimental results show that the proposed method achieves superior performance compared with state-of-the-art methods in both qualitative and quantitative evaluation. Moreover, a series of ablation studies validates the contribution of each module of the proposed method.
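The tensor-decomposition step described in the abstract (compressing the high-dimensional 4D LF into a compact representation before feeding a 2D registration network) can be illustrated with a minimal truncated-HOSVD (Tucker) sketch. This is not the authors' implementation; the array shapes, rank choices, and function names are assumptions for illustration only.

```python
import numpy as np

def hosvd_compress(lf, ranks):
    """Truncated HOSVD (Tucker) of a 4D light field tensor.

    lf: array of shape (U, V, H, W) -- angular (U, V) and spatial (H, W) axes.
    ranks: four target ranks, one per mode.
    Returns (core, factors); the core has shape == ranks.
    """
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
        unfolded = np.moveaxis(lf, mode, 0).reshape(lf.shape[mode], -1)
        # The leading left singular vectors give the mode-n factor matrix.
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    # Project the tensor onto each factor; each tensordot contracts the
    # current leading axis, so the axes rotate in mode order.
    core = lf.astype(np.float64)
    for F in factors:
        core = np.tensordot(core, F, axes=([0], [0]))
    return core, factors

def hosvd_reconstruct(core, factors):
    """Invert the projections to approximate the original tensor."""
    t = core
    for F in factors:
        t = np.tensordot(t, F.T, axes=([0], [0]))
    return t
```

With full ranks the factor matrices are square and orthogonal, so reconstruction is exact; truncating the ranks yields the compact core that a lightweight 2D network could then process in place of the full 4D tensor.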
Pages: 215-229
Page count: 15