Gen-NeRF: Efficient and Generalizable Neural Radiance Fields via Algorithm-Hardware Co-Design

Cited by: 4
Authors
Fu, Yonggan [1 ]
Ye, Zhifan [1 ]
Yuan, Jiayi [2 ]
Zhang, Shunyao [2 ]
Li, Sixu [1 ]
You, Haoran [1 ]
Lin, Yingyan [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Rice Univ, Houston, TX USA
Source
PROCEEDINGS OF THE 50TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE, ISCA 2023 | 2023
Funding
U.S. National Science Foundation;
Keywords
Neural Radiance Field; Hardware Accelerator; VR;
DOI
10.1145/3579371.3589109
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Novel view synthesis is an essential functionality for enabling immersive experiences in various Augmented- and Virtual-Reality (AR/VR) applications, for which Neural Radiance Field (NeRF) has emerged as the state-of-the-art (SOTA) technique. In particular, generalizable NeRFs have gained increasing popularity thanks to their cross-scene generalization capability, which enables NeRFs to be instantly serviceable for new scenes without per-scene training. Despite their promise, generalizable NeRFs aggravate the already prohibitive complexity of NeRFs due to the extra memory accesses required to acquire scene features, causing NeRFs' ray marching process to become memory-bound. Existing sparsity-exploitation techniques for NeRFs fall short of tackling this dilemma, because they require knowledge of the sparsity distribution of the target 3D scene, which is unknown when generalizing NeRFs to a new scene. To this end, we propose Gen-NeRF, an algorithm-hardware co-design framework dedicated to generalizable NeRF acceleration, which aims to achieve both rendering efficiency and generalization capability in NeRFs. To the best of our knowledge, Gen-NeRF is the first to enable real-time generalizable NeRFs, demonstrating a promising NeRF solution for next-generation AR/VR devices. On the algorithm side, Gen-NeRF integrates a coarse-then-focus sampling strategy, leveraging the fact that different regions of a 3D scene contribute differently to the rendered pixels depending on where the objects are located, to enable sparse yet effective sampling. In addition, Gen-NeRF replaces the ray transformer, which is commonly included in SOTA generalizable NeRFs to enhance density estimation, with a novel Ray-Mixer module to reduce workload heterogeneity.
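The coarse-then-focus idea described above can be illustrated with a minimal sketch: place a few uniform samples along a ray, then spend the remaining sample budget near the coarse samples whose estimated densities suggest occupied space. This is an assumption-laden toy, not the paper's implementation; the function name `coarse_then_focus` and the interface of `density_fn` are hypothetical.

```python
import numpy as np

def coarse_then_focus(ray_origin, ray_dir, density_fn,
                      n_coarse=16, n_focus=48, near=0.0, far=1.0):
    """Illustrative two-stage sampling of depths along one camera ray."""
    # Stage 1: coarse, uniform depths along the ray.
    t_coarse = np.linspace(near, far, n_coarse)
    pts = ray_origin[None, :] + t_coarse[:, None] * ray_dir[None, :]
    sigma = density_fn(pts)                      # per-point density estimates

    # Turn densities into a sampling distribution over coarse intervals.
    weights = sigma / (sigma.sum() + 1e-8)

    # Stage 2: draw focused depths in proportion to the weights
    # (inverse-transform sampling over the piecewise-constant CDF).
    cdf = np.concatenate([[0.0], np.cumsum(weights)])
    u = np.random.rand(n_focus)
    idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, n_coarse - 2)
    left, right = t_coarse[idx], t_coarse[idx + 1]
    t_focus = left + np.random.rand(n_focus) * (right - left)

    return np.sort(np.concatenate([t_coarse, t_focus]))
```

The point of the sketch is the budget split: most samples land where the coarse pass saw density, so empty regions of a new scene are skipped without knowing its sparsity pattern in advance.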
On the hardware side, Gen-NeRF highlights an accelerator micro-architecture dedicated to accelerating the resulting model workloads from our Gen-NeRF algorithm to maximize the data reuse opportunities among different rays by making use of their epipolar geometric relationship. Furthermore, our Gen-NeRF accelerator features a customized dataflow to enhance data locality during point-to-hardware mapping and an optimized scene feature storage strategy to minimize memory bank conflicts across camera rays of NeRFs. Extensive experiments validate the effectiveness of our proposed Gen-NeRF framework in enabling real-time and generalizable novel view synthesis.
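One way to picture the bank-conflict concern above: if the features of every ray start at bank 0, parallel rays fetching the same feature index all collide on one bank. Skewing the starting bank by the ray index spreads simultaneous accesses across banks. The mapping below is a generic interleaving sketch, not Gen-NeRF's actual storage strategy; `bank_of` and its parameters are hypothetical.

```python
def bank_of(feature_idx, ray_idx, n_banks=8):
    """Illustrative skewed placement of per-ray scene features.

    Naive layout: bank = feature_idx % n_banks, so all rays reading
    feature 0 in the same cycle hit the same bank. Adding the ray
    index rotates each ray's starting bank, so concurrent accesses
    to the same feature index land on distinct banks.
    """
    return (feature_idx + ray_idx) % n_banks
```

With 8 banks and 8 rays issued per cycle, each ray's access to a given feature index maps to a different bank, so the fetch completes without serialization.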
Pages: 1038 - 1049
Page count: 12