Ambient-NeRF: light train enhancing neural radiance fields in low-light conditions with ambient-illumination

Cited by: 2
Authors
Zhang, Peng [1 ]
Hu, Gengsheng [2 ]
Chen, Mei [1 ]
Emam, Mahmoud [2 ,3 ,4 ]
Affiliations
[1] School of Media and Design, Hangzhou Dianzi University, No. 2 Street, Baiyang District, Hangzhou, Zhejiang
[2] Shangyu Institute of Science and Engineering Co., Ltd., Hangzhou Dianzi University, Wuxing West Street, Cao'e District, Shaoxing, Zhejiang
[3] School of Computer Science and Technology, Hangzhou Dianzi University, No. 2 Street, Baiyang District, Hangzhou, Zhejiang
[4] Faculty of Artificial Intelligence, Menoufia University, Gamal Abd El Nasr Street, Shebin El-Koom, Monufia Governorate
Keywords
3D reconstruction; Low-light image enhancement; Multi-layer perceptron; NeRF; Neural radiance field
DOI
10.1007/s11042-024-19699-3
Abstract
NeRF can render photorealistic 3D scenes. It is widely used in virtual reality, autonomous driving, game development, and other fields, and has quickly become one of the most popular techniques for 3D reconstruction. NeRF renders a realistic 3D scene by casting rays from the camera's spatial position along each viewing direction, sampling the scene along these rays, and compositing the view seen from that viewpoint. However, when the brightness of the original input images is low, it is difficult to recover the scene. Inspired by the ambient-illumination term in the Phong model from computer graphics, we assume that the final rendered image is the product of the scene color and the ambient illumination. In this paper, we employ a Multi-Layer Perceptron (MLP) network to train an ambient-illumination tensor I, which is multiplied by the color predicted by NeRF to render images with normal illumination. Furthermore, we use tiny-cuda-nn as the backbone network to simplify the proposed network structure and greatly improve the training speed. Additionally, a new loss function is introduced to achieve better image quality under low-illumination conditions. Experimental results demonstrate the efficiency of the proposed method in enhancing low-light scene images compared with other state-of-the-art methods, with overall averages of PSNR: 20.53, SSIM: 0.785, and LPIPS: 0.258 on the LOM dataset. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
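The multiplicative coupling described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' implementation: the AmbientMLP module, its layer sizes, the Softplus activation, and the choice to condition the illumination factor on the viewing direction are all assumptions, and the paper's tiny-cuda-nn backbone is replaced here with plain torch.nn layers so the sketch is self-contained. It shows a standard NeRF volume-rendering composite producing a scene color, which is then multiplied element-wise by an MLP-predicted ambient-illumination tensor I.

```python
import torch
import torch.nn as nn


class AmbientMLP(nn.Module):
    """Small MLP that predicts a per-ray RGB ambient-illumination factor (illustrative)."""

    def __init__(self, in_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # keep illumination positive (assumed activation)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def composite(rgb: torch.Tensor, sigma: torch.Tensor, deltas: torch.Tensor) -> torch.Tensor:
    """Standard NeRF volume rendering for a batch of rays.

    rgb: (R, S, 3) per-sample colors, sigma: (R, S) densities, deltas: (R, S) sample spacings.
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)                                     # (R, S)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)                           # inclusive product
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)    # exclusive transmittance
    weights = alpha * trans                                                      # (R, S)
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)                              # (R, 3)


# Toy forward pass: random tensors stand in for the NeRF backbone's outputs.
R, S = 1024, 64                          # rays per batch, samples per ray
rgb_lowlight = torch.rand(R, S, 3)       # color predicted by the NeRF branch
sigma = torch.rand(R, S)                 # density predicted by the NeRF branch
deltas = torch.full((R, S), 0.01)        # spacing between consecutive samples
view_dirs = torch.randn(R, 3)            # per-ray viewing directions

ambient = AmbientMLP()                                   # trained jointly with the NeRF branch
scene_color = composite(rgb_lowlight, sigma, deltas)     # (R, 3) scene color C
illum = ambient(view_dirs)                               # (R, 3) ambient-illumination tensor I
rendered = scene_color * illum                           # element-wise product C * I
```

During training, the product would be supervised by the paper's loss function, which is not reproduced here because the abstract does not specify its form.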
Pages: 80007-80023
Number of pages: 16