UniVoxel: Fast Inverse Rendering by Unified Voxelization of Scene Representation

Cited by: 0
Authors
Wu, Shuang [1 ]
Tang, Songlin [1 ]
Lu, Guangming [1 ]
Liu, Jianzhuang [2 ]
Pei, Wenjie [1 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen, Guangdong, Peoples R China
[2] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Guangdong, Peoples R China
Source
COMPUTER VISION - ECCV 2024, PT LXXI | 2025 / Vol. 15129
Funding
National Natural Science Foundation of China;
关键词
Inverse Rendering; Neural Rendering; Relighting;
DOI
10.1007/978-3-031-73209-6_21
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Typical inverse rendering methods focus on learning implicit neural scene representations by modeling the geometry, materials and illumination separately, which entails significant computational cost during optimization. In this work, we design a Unified Voxelization framework for explicit learning of scene representations, dubbed UniVoxel, which allows for efficient joint modeling of the geometry, materials and illumination, thereby accelerating inverse rendering significantly. Specifically, we propose to encode a scene into a latent volumetric representation, from which the geometry, materials and illumination can be readily learned via lightweight neural networks in a unified manner. In particular, an essential design of UniVoxel is to leverage local Spherical Gaussians to represent the incident light radiance, which enables the seamless integration of illumination modeling into the unified voxelization framework. This novel design enables UniVoxel to model the joint effects of direct lighting, indirect lighting and light visibility efficiently, without expensive multi-bounce ray tracing. Extensive experiments on multiple benchmarks covering diverse scenes demonstrate that UniVoxel boosts optimization efficiency significantly compared to other methods, reducing the per-scene training time from hours to 18 minutes, while achieving favorable reconstruction quality. Code is available at https://github.com/freemantom/UniVoxel.
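The abstract's key design is representing the incident light radiance at a point with a mixture of local Spherical Gaussians, whose parameters can be predicted by a lightweight network from the voxel features. The standard Spherical Gaussian mixture this builds on can be sketched as follows; the function and parameter names are illustrative, not the authors' API, and per-lobe parameters would in practice come from the learned representation rather than be fixed:

```python
import numpy as np

def sg_radiance(omega, xi, lam, mu):
    """Evaluate a Spherical Gaussian mixture at a unit query direction.

    Each lobe i contributes mu_i * exp(lam_i * (dot(omega, xi_i) - 1)),
    the standard SG form: the lobe peaks (with value mu_i) when omega
    aligns with its axis xi_i, and falls off at a rate set by lam_i.

    omega: (3,)   unit query direction
    xi:    (K, 3) unit lobe axes
    lam:   (K,)   per-lobe sharpness (lambda)
    mu:    (K, 3) per-lobe RGB amplitude
    returns (3,)  RGB incident radiance along omega
    """
    cos = xi @ omega                      # (K,) cosine between query and each axis
    weights = np.exp(lam * (cos - 1.0))   # (K,) equals 1 exactly at the lobe axis
    return weights @ mu                   # (3,) weighted sum of RGB amplitudes
```

Because each lobe is a smooth closed-form function of direction, the mixture can absorb direct lighting, indirect lighting and visibility effects into a single queryable representation, which is what lets the framework avoid multi-bounce ray tracing.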
Pages: 360-376
Page count: 17