Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion

Cited by: 1
Authors
Wang, Bojun [1 ]
Zhang, Danhong [1 ]
Su, Yixin [1 ]
Zhang, Huajun [1 ]
Affiliations
[1] Wuhan University of Technology, School of Automation, Wuhan 430070, People's Republic of China
Keywords
neural radiance fields; volume rendering; view synthesis; image-based rendering; depth priors; rendering acceleration
DOI
10.3390/s24061919
Chinese Library Classification
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Neural radiance fields (NeRFs) encode a scene in a neural representation and can render photorealistic novel views. However, NeRF has notable limitations: it does not explicitly capture surface geometry, rendering only the colors of object surfaces, and its training is exceedingly time-consuming. We propose Depth-NeRF to address these issues. Specifically, our approach employs a fast depth completion algorithm to denoise and complete the depth maps produced by RGB-D cameras. Benefiting from this dense depth information, the improved depth maps guide NeRF's sample points to be distributed closer to the scene surface. Furthermore, we optimize the network structure of NeRF and use the depth information to constrain the optimization, ensuring that the ray termination distribution is consistent with the scene geometry. Compared to NeRF, our method accelerates training by 18%, and the rendered images achieve a higher PSNR than those of mainstream methods. The RMSE between the rendered scene depth and the ground-truth depth is also significantly reduced, indicating that our method captures the geometric information of the scene more faithfully. With these improvements, the NeRF model can be trained more efficiently and produces more accurate renderings.
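The abstract names two technical ingredients: depth-guided sampling, which concentrates NeRF sample points near the surface indicated by the completed depth map, and a depth constraint that keeps the ray termination distribution consistent with the scene geometry. The paper's implementation details are not given in this record; the sketch below is only a minimal illustration of how such depth priors are commonly used in depth-supervised NeRF variants. All names and parameters (sample_depth_guided, rendered_depth, depth_loss, sigma_band) are hypothetical and not taken from the paper.

# Illustrative sketch only -- not the authors' implementation.
# (a) Sample ray distances concentrated around a completed depth prior.
# (b) Penalize disagreement between the rendered (expected) termination depth
#     and that prior, which is one common way to constrain ray termination.
import torch

def sample_depth_guided(depth_prior, n_samples=64, sigma_band=0.05,
                        near=0.1, far=10.0):
    """depth_prior: (N,) completed depth per ray (hypothetical input).
    Returns sorted t-values of shape (N, n_samples)."""
    n_rays = depth_prior.shape[0]
    # Half the samples: uniform coverage of the full [near, far] range.
    n_uni = n_samples // 2
    t_uniform = near + (far - near) * torch.rand(n_rays, n_uni)
    # Other half: a narrow Gaussian band around the depth prior (near the surface).
    n_surf = n_samples - n_uni
    t_surface = depth_prior[:, None] + sigma_band * torch.randn(n_rays, n_surf)
    t_vals = torch.cat([t_uniform, t_surface], dim=-1).clamp(near, far)
    return torch.sort(t_vals, dim=-1).values

def rendered_depth(weights, t_vals):
    """Expected ray-termination depth from volume-rendering weights."""
    return (weights * t_vals).sum(dim=-1)

def depth_loss(weights, t_vals, depth_prior, valid_mask):
    """Squared error between rendered depth and the completed depth map,
    applied only where the depth prior is valid."""
    d_hat = rendered_depth(weights, t_vals)
    return ((d_hat - depth_prior)[valid_mask] ** 2).mean()

# Example with random stand-in data: 1024 rays; in practice the weights would
# come from NeRF's volume rendering and the mask from the depth-completion step.
if __name__ == "__main__":
    depth = 2.0 + torch.rand(1024)
    valid = torch.ones(1024, dtype=torch.bool)
    t = sample_depth_guided(depth)
    w = torch.softmax(torch.randn(1024, t.shape[-1]), dim=-1)
    print(depth_loss(w, t, depth, valid))

In a full training loop, such a depth term would typically be added to NeRF's photometric loss with a weighting factor, so that color supervision and depth supervision are balanced.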
Pages: 17