ProLiF: Progressively-connected Light Field network for efficient view synthesis

Cited by: 1
Authors
Wang, Peng [1 ]
Liu, Yuan [1 ]
Lin, Guying [1 ]
Gu, Jiatao [2 ]
Liu, Lingjie [1 ,3 ]
Komura, Taku [1 ]
Wang, Wenping [4 ]
Affiliations
[1] Univ Hong Kong, Hong Kong, Peoples R China
[2] Apple, Cupertino, CA USA
[3] Univ Penn, Philadelphia, PA USA
[4] Texas A&M Univ, PETR 416, 400 Bizzell St, College Stn, TX 77843 USA
Source
COMPUTERS & GRAPHICS-UK | 2024, Vol. 120
Keywords
Neural rendering; View synthesis; Light field
DOI
10.1016/j.cag.2024.103913
CLC number
TP31 [Computer Software]
Subject classification code
081202; 0835
Abstract
This paper presents a simple yet practical network architecture, ProLiF (Progressively-connected Light Field network), for efficient differentiable view synthesis of complex forward-facing scenes in both the training and inference stages. View synthesis has advanced significantly with the recent Neural Radiance Fields (NeRF). However, training a NeRF requires hundreds of network evaluations to synthesize a single pixel color, which consumes substantial device memory and time. This cost prevents the differentiable rendering of a large patch of pixels during training for semantic-level supervision, which is critical for many practical applications such as robust scene fitting, style transfer, and adversarial training. In contrast, our proposed simple architecture, ProLiF, encodes a two-plane light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses. To keep the multi-view 3D consistency of the neural light field, we propose a progressive training strategy with novel regularization losses. We demonstrate that ProLiF works well with the LPIPS loss to achieve robustness to varying lighting conditions, and with the NNFM and CLIP losses to edit the rendering style of the scene.
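The following is a minimal, hypothetical sketch (not the authors' code) of the core idea the abstract describes: each ray is reduced to the 4D coordinate of its intersections with two fixed planes, and a single MLP evaluation maps that coordinate to a color, so a whole patch of rays can be rendered differentiably in one training step and supervised with patch-level losses such as LPIPS. The class name, plane depths, and network sizes below are illustrative assumptions; the actual ProLiF architecture additionally uses progressive connections and regularization losses not shown here.

```python
# Minimal two-plane light field sketch (illustrative; not the paper's architecture).
import torch
import torch.nn as nn


class TwoPlaneLightField(nn.Module):
    def __init__(self, z_near: float = 1.0, z_far: float = 2.0, hidden: int = 256):
        super().__init__()
        self.z_near, self.z_far = z_near, z_far
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, origins: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
        # Intersect each ray o + t * d with the planes z = z_near and z = z_far.
        t_near = (self.z_near - origins[:, 2:3]) / dirs[:, 2:3]
        t_far = (self.z_far - origins[:, 2:3]) / dirs[:, 2:3]
        uv = (origins + t_near * dirs)[:, :2]  # (u, v) on the near plane
        st = (origins + t_far * dirs)[:, :2]   # (s, t) on the far plane
        # One network evaluation per ray, versus hundreds of point samples in NeRF.
        return self.mlp(torch.cat([uv, st], dim=-1))


# Usage: render a full 64x64 patch of rays in one forward pass.
model = TwoPlaneLightField()
origins = torch.zeros(64 * 64, 3)        # toy camera at the origin
dirs = torch.randn(64 * 64, 3)
dirs[:, 2] = dirs[:, 2].abs() + 1e-3     # point rays toward the planes
colors = model(origins, dirs)            # (4096, 3) RGB values
```

Because rendering a ray costs one forward pass rather than many sampled point evaluations, an entire image patch fits in a single training step, which is what makes image- or patch-level losses (LPIPS, NNFM, CLIP) practical to apply.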
Pages: 11