Gen-NeRF: Efficient and Generalizable Neural Radiance Fields via Algorithm-Hardware Co-Design

Cited by: 4
Authors
Fu, Yonggan [1 ]
Ye, Zhifan [1 ]
Yuan, Jiayi [2 ]
Zhang, Shunyao [2 ]
Li, Sixu [1 ]
You, Haoran [1 ]
Lin, Yingyan [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Rice Univ, Houston, TX USA
Source
PROCEEDINGS OF THE 50TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE, ISCA 2023 | 2023
Funding
U.S. National Science Foundation;
Keywords
Neural Radiance Field; Hardware Accelerator; VR;
DOI
10.1145/3579371.3589109
CLC Number
TP3 [Computing technology and computer technology];
Discipline Classification Code
0812;
Abstract
Novel view synthesis is an essential capability for enabling immersive experiences in various Augmented- and Virtual-Reality (AR/VR) applications, and Neural Radiance Fields (NeRFs) have emerged as the state-of-the-art (SOTA) technique for it. In particular, generalizable NeRFs have gained increasing popularity thanks to their cross-scene generalization capability, which makes them instantly serviceable for new scenes without per-scene training. Despite this promise, generalizable NeRFs aggravate NeRFs' already prohibitive complexity because of the extra memory accesses needed to acquire scene features, causing the ray marching process to be memory-bound. Existing sparsity-exploitation techniques for NeRFs fall short of resolving this dilemma because they require knowledge of the sparsity distribution of the target 3D scene, which is unknown when generalizing to a new scene. To this end, we propose Gen-NeRF, an algorithm-hardware co-design framework dedicated to generalizable NeRF acceleration that aims to achieve both rendering efficiency and generalization capability. To the best of our knowledge, Gen-NeRF is the first to enable real-time generalizable NeRFs, demonstrating a promising NeRF solution for next-generation AR/VR devices. On the algorithm side, Gen-NeRF integrates a coarse-then-focus sampling strategy that leverages the fact that different regions of a 3D scene contribute differently to the rendered pixels, depending on where the objects are located, to enable sparse yet effective sampling. In addition, Gen-NeRF replaces the ray transformer, which SOTA generalizable NeRFs generally include to enhance density estimation, with a novel Ray-Mixer module to reduce workload heterogeneity. On the hardware side, Gen-NeRF features an accelerator micro-architecture dedicated to the model workloads produced by the Gen-NeRF algorithm, maximizing data-reuse opportunities among different rays by exploiting their epipolar geometric relationships. Furthermore, the Gen-NeRF accelerator features a customized dataflow that enhances data locality during point-to-hardware mapping and an optimized scene-feature storage strategy that minimizes memory bank conflicts across camera rays. Extensive experiments validate the effectiveness of the proposed Gen-NeRF framework in enabling real-time and generalizable novel view synthesis.
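
The coarse-then-focus sampling strategy is only described at a high level in the abstract. As a rough illustration of the general idea, the sketch below implements NeRF-style two-pass sampling in PyTorch: a cheap uniform coarse pass estimates where along each ray the scene actually contributes, and a focus pass concentrates samples there via inverse-CDF resampling. All function names, shapes, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal, hypothetical sketch of coarse-then-focus sampling
# (NeRF-style hierarchical resampling); not the paper's exact method.
import torch

def coarse_then_focus(density_fn, rays_o, rays_d, near, far,
                      n_coarse=32, n_focus=64):
    # --- Coarse pass: uniform depths along each ray. ---
    t = near + (far - near) * torch.linspace(0.0, 1.0, n_coarse)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]
    sigma = density_fn(pts)          # assumed shape: (n_rays, n_coarse)

    # Convert densities into per-interval contribution weights
    # (standard volume-rendering alpha compositing).
    delta = t[1:] - t[:-1]
    alpha = 1.0 - torch.exp(-sigma[:, :-1] * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1),
        dim=-1)[:, :-1]
    w = alpha * trans                # (n_rays, n_coarse - 1)

    # --- Focus pass: inverse-CDF sampling of the coarse weights, so
    # most focus samples land near high-contribution regions. ---
    cdf = torch.cumsum(w, dim=-1)
    cdf = cdf / (cdf[:, -1:] + 1e-10)
    u = torch.rand(w.shape[0], n_focus)
    idx = torch.searchsorted(cdf, u, right=True).clamp(max=w.shape[1] - 1)
    return t[idx]                    # (n_rays, n_focus) depths to query

# Toy usage: a spherical-shell density around the origin.
density = lambda p: torch.exp(-50.0 * (p.norm(dim=-1) - 1.0) ** 2)
o = torch.zeros(4, 3)
d = torch.tensor([[0.0, 0.0, 1.0]]).repeat(4, 1)
print(coarse_then_focus(density, o, d, near=0.5, far=2.0).shape)
```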
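
The Ray-Mixer module itself is likewise not detailed in the abstract. Given that it is introduced as a lower-heterogeneity replacement for the ray transformer, one plausible reading is an MLP-Mixer-style block (Tolstikhin et al., 2021) that mixes per-sample features along each ray with fixed MLPs instead of attention. The sketch below, including the RayMixerBlock name and all layer sizes, is a hypothetical illustration of that reading only.

```python
# A hypothetical MLP-Mixer-style block over the samples of each ray;
# a guess at what a "Ray-Mixer" could look like, not the paper's design.
import torch
import torch.nn as nn

class RayMixerBlock(nn.Module):
    def __init__(self, n_samples: int, dim: int, hidden: int = 128):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        # Token mixing: exchanges information across sample positions
        # along the ray with a fixed MLP, so every ray runs the exact
        # same dense workload (no data-dependent attention pattern).
        self.token_mlp = nn.Sequential(
            nn.Linear(n_samples, hidden), nn.GELU(),
            nn.Linear(hidden, n_samples))
        self.norm2 = nn.LayerNorm(dim)
        # Channel mixing: per-sample feature transformation.
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(),
            nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_rays, n_samples, dim) per-sample scene features.
        y = self.norm1(x).transpose(1, 2)        # (n_rays, dim, n_samples)
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x

# Example: mix the features of 64 samples along each of 1024 rays.
block = RayMixerBlock(n_samples=64, dim=32)
print(block(torch.randn(1024, 64, 32)).shape)  # torch.Size([1024, 64, 32])
```

Swapping attention for fixed MLPs makes the per-ray compute pattern identical across rays, which is consistent with the abstract's stated goal of reducing workload heterogeneity.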
Pages: 1038 - 1049
Number of pages: 12