Learning to simultaneously enhance field of view and dynamic range for light field imaging

Cited by: 9
Authors
Chen, Yeyao [1 ]
Jiang, Gangyi [1 ]
Yu, Mei [1 ]
Xu, Haiyong [1 ]
Ho, Yo-Sung [2 ]
Affiliations
[1] Ningbo Univ, Fac Informat Sci & Engn, Ningbo 315211, Peoples R China
[2] Gwangju Inst Sci & Technol, Sch Elect Engn & Comp Sci, Gwangju 61005, South Korea
Keywords
Light field; Wide field of view; High dynamic range; Image fusion; Unsupervised learning; Deep homography
DOI
10.1016/j.inffus.2022.10.021
CLC Number
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Light field (LF) imaging, which simultaneously captures the intensity and direction information of light rays and thereby enables many vision applications, has received widespread attention. However, limited by the optical structure of the LF camera, the acquired LF images usually suffer from a narrow field of view (FOV) and a low dynamic range. To address these problems, this paper proposes an unsupervised wide-FOV high dynamic range (HDR) LF imaging method, which effectively reconstructs a wide-FOV HDR LF image from a set of source LF images captured from different perspectives with different exposures. Specifically, the proposed method first exploits tensor decomposition to obtain a compact representation of the high-dimensional LF image, enabling a computationally efficient 2D neural network for LF registration. Subsequently, an exposure restoration network is constructed to recover the multi-exposure information of the registered non-overlapping regions, which is then linearly fused with the previous registration results to generate the stitched wide-FOV multi-exposure LF images. Finally, an HDR LF blending network with two unsupervised losses is designed to blend the stitching results into the desired wide-FOV HDR LF image. Experimental results show that the proposed method achieves superior performance compared with state-of-the-art methods in both qualitative and quantitative evaluation. Moreover, a series of ablation studies validates the effectiveness of each module in the proposed method.
Pages: 215-229
Number of pages: 15
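A minimal sketch of the idea behind the first stage described in the abstract, namely compressing the high-dimensional LF into a compact representation so that a 2D network can handle registration efficiently. The function names, shapes, and rank below are illustrative assumptions; the paper's actual tensor decomposition and registration network are not reproduced here. The sketch uses a truncated SVD of the angular-mode unfolding (a Tucker-style factorization along one mode):

```python
# Hedged illustration: compact low-rank representation of a 4D light field
# via angular-mode unfolding + truncated SVD. Shapes and rank are toy values.
import numpy as np

def angular_compact_representation(lf, rank):
    """lf: (U, V, H, W) light field. Returns `rank` spatial basis images and
    the (U*V, rank) angular coefficients, so a 2D network can operate on a
    few basis images instead of all U*V sub-aperture views."""
    U, V, H, W = lf.shape
    mat = lf.reshape(U * V, H * W)            # unfold: angular mode x spatial mode
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    coeffs = u[:, :rank] * s[:rank]           # (U*V, rank) angular coefficients
    basis = vt[:rank].reshape(rank, H, W)     # rank spatial basis images
    return coeffs, basis

def reconstruct(coeffs, basis, U, V):
    """Invert the factorization back to a (U, V, H, W) light field."""
    rank, H, W = basis.shape
    return (coeffs @ basis.reshape(rank, H * W)).reshape(U, V, H, W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy LF with genuinely low angular rank; real sub-aperture views are
    # highly correlated, which is what makes this kind of compaction useful.
    ang = rng.standard_normal((25, 3))
    spa = rng.standard_normal((3, 64 * 64))
    lf = (ang @ spa).reshape(5, 5, 64, 64)
    coeffs, basis = angular_compact_representation(lf, rank=4)
    lf_hat = reconstruct(coeffs, basis, 5, 5)
    err = np.linalg.norm(lf - lf_hat) / np.linalg.norm(lf)
    print(f"relative reconstruction error at rank 4: {err:.2e}")
```

Because neighboring sub-aperture views of a real light field are highly correlated, a small rank already captures most of the signal, which is why registering a handful of 2D basis images is far cheaper than registering every view of the 4D field.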
相关论文
共 65 条
  • [11] Gul MSK, 2020, IEEE INT CONF MULTI
  • [12] Enhancing Light Fields through Ray-Space Stitching
    Guo, Xinqing
    Yu, Zhan
    Kang, Sing Bing
    Lin, Haiting
    Yu, Jingyi
    [J]. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2016, 22 (07) : 1852 - 1861
  • [13] Hartley R., 2003, MULTIPLE VIEW GEOMET
  • [14] Image Sharpness Assessment Based on Local Phase Coherence
    Hassen, Rania
    Wang, Zhou
    Salama, Magdy M. A.
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2013, 22 (07) : 2798 - 2810
  • [15] He KM, 2014, LECT NOTES COMPUT SC, V8691, P346, DOI [arXiv:1406.4729, 10.1007/978-3-319-10578-9_23]
  • [16] Hsu P.-H., 2018, P IEEE INT S CIRCUIT, P1
  • [17] Hu J, 2018, PROC CVPR IEEE, P7132, DOI [10.1109/CVPR.2018.00745, 10.1109/TPAMI.2019.2913372]
  • [18] Low Bitrate Light Field Compression With Geometry and Content Consistency
    Huang, Xinpeng
    An, Ping
    Chen, Yilei
    Liu, Deyang
    Shen, Liquan
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 152 - 165
  • [19] JADERBERG M, 2015, ADV NEURAL INFORM PR, P2017
  • [20] Leveraging Line-point Consistence to Preserve Structures forWide Parallax Image Stitching
    Jia, Qi
    Li, ZhengJun
    Fan, Xin
    Zhao, Haotian
    Teng, Shiyu
    Ye, Xinchen
    Latecki, Longin Jan
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 12181 - 12190