Online Learning of Neural Surface Light Fields Alongside Real-Time Incremental 3D Reconstruction

Cited by: 1
Authors
Yuan, Yijun [1 ]
Nuechter, Andreas [1 ]
Affiliations
[1] Julius Maximilians Univ, Informat 17, D-97070 Wurzburg, Germany
Keywords
Three-dimensional displays; Surface reconstruction; Rendering (computer graphics); Training; Real-time systems; Robots; Image reconstruction; Mapping; SLAM;
DOI
10.1109/LRA.2023.3273516
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline Codes
080202 ; 1405 ;
Abstract
Immersive novel view generation is an important technology in computer graphics and has recently also received attention for operator-based human-robot interaction. However, the training involved is time-consuming, so current evaluations focus mainly on object-level capture. This limits the use of related models for 3D reconstruction in the robotics community, since robots (1) usually capture only a very narrow range of view directions toward surfaces, which causes arbitrary predictions for unseen, novel directions, (2) require real-time algorithms, and (3) work with growing scenes, e.g., in robotic exploration. This letter proposes a novel Neural Surface Light Fields model that copes with the small range of view directions while producing good results in unseen directions. Exploiting recent encoding techniques, the training of our model is highly efficient. In addition, we design Multiple Asynchronous Neural Agents (MANA), a universal framework that learns each small region in parallel for large-scale growing scenes. Our model learns Neural Surface Light Fields (NSLF) online alongside real-time 3D reconstruction, with a sequential data stream as the shared input. Beyond online training, our model also provides real-time rendering for visualization once the data stream is complete. We conduct experiments on well-known RGB-D indoor datasets, showing that our model embeds flexibly into real-time 3D reconstruction and demonstrating high-fidelity view synthesis for these scenes.
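The MANA idea described in the abstract — partitioning a growing scene into small regions, each learned by its own neural agent in parallel — can be sketched roughly as follows. This is a minimal illustration under assumed details, not the authors' implementation: the region size, the agent's internals, and the dispatch policy are all placeholders, and the sketch dispatches sequentially rather than asynchronously to stay self-contained.

```python
from collections import defaultdict

REGION_SIZE = 1.0  # assumed region edge length (e.g., meters)

def region_key(point, size=REGION_SIZE):
    """Map a 3D point to the integer key of the axis-aligned region containing it."""
    return tuple(int(c // size) for c in point)

class Agent:
    """Stand-in for one small per-region neural model (e.g., an NSLF MLP)."""
    def __init__(self, key):
        self.key = key
        self.samples = []  # accumulated (point, view_dir, rgb) samples

    def train_step(self, batch):
        # A real agent would run an optimizer step here; we only store data.
        self.samples.extend(batch)

class MANA:
    """Route incoming surface samples from the shared data stream to per-region agents,
    spawning a new agent whenever the scene grows into an unseen region."""
    def __init__(self):
        self.agents = {}

    def ingest(self, samples):
        buckets = defaultdict(list)
        for s in samples:
            buckets[region_key(s[0])].append(s)
        for key, batch in buckets.items():
            agent = self.agents.setdefault(key, Agent(key))
            agent.train_step(batch)

mana = MANA()
# Two samples fall in one region, the third in another: (point, view_dir, rgb).
mana.ingest([((0.2, 0.3, 0.1), (0, 0, 1), (255, 0, 0)),
             ((0.4, 0.5, 0.2), (0, 0, 1), (0, 255, 0)),
             ((1.7, 0.1, 0.0), (0, 1, 0), (0, 0, 255))])
print(len(mana.agents))  # → 2
```

In the letter the per-region agents train asynchronously so that learning keeps pace with the incremental reconstruction; replacing the sequential loop above with worker threads or processes, one per agent, would capture that aspect.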
Pages: 3843-3850
Page count: 8
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 14551 - 14560