FoV-NeRF: Foveated Neural Radiance Fields for Virtual Reality

Cited by: 86
Authors
Deng, Nianchen [1]
He, Zhenyi [2]
Ye, Jiannan [1]
Duinkharjav, Budmonde [3]
Chakravarthula, Praneeth [4]
Yang, Xubo [1,5]
Sun, Qi [6]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Software, Shanghai, Peoples R China
[2] NYU, Dept Comp Sci, New York, NY 10003 USA
[3] NYU, Immers Comp Lab, New York, NY 10003 USA
[4] Univ N Carolina, Comp Sci, Chapel Hill, NC USA
[5] Peng Cheng Lab, Shenzhen, Peoples R China
[6] NYU, Tandon Sch Engn, New York, NY 10003 USA
Keywords
Virtual Reality; Gaze-Contingent Graphics; Neural Representation; Foveated Rendering
DOI
10.1109/TVCG.2022.3203102
Chinese Library Classification (CLC)
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
Virtual Reality (VR) is becoming ubiquitous with the rise of consumer displays and commercial VR platforms. Such displays require low-latency, high-quality rendering of synthetic imagery with reduced compute overheads. Recent advances in neural rendering have shown promise in unlocking new possibilities in 3D computer graphics via image-based representations of virtual or physical environments. In particular, neural radiance fields (NeRF) demonstrate that photo-realistic quality and continuous view changes of 3D scenes can be achieved without loss of view-dependent effects. While NeRF can significantly benefit rendering for VR applications, it faces unique challenges posed by wide field-of-view, high-resolution, and stereoscopic/egocentric viewing, which typically cause low quality and high latency in the rendered images. In VR, this not only harms the interaction experience but may also cause sickness. To tackle these problems and move toward six-degrees-of-freedom, egocentric, and stereo NeRF in VR, we present the first gaze-contingent 3D neural representation and view synthesis method. We incorporate the human psychophysics of visual and stereo acuity into an egocentric neural representation of 3D scenery. We then jointly optimize latency/performance and visual quality while mutually bridging human perception and neural scene synthesis to achieve perceptually high-quality immersive interaction. We conducted both objective analyses and subjective studies to evaluate the effectiveness of our approach. We find that our method significantly reduces latency (up to a 99% time reduction compared with NeRF) without loss of high-fidelity rendering (perceptually identical to full-resolution ground truth). The presented approach may serve as a first step toward future VR/AR systems that capture, teleport, and visualize remote environments in real time.
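
As a rough, hypothetical illustration of the gaze-contingent idea described in the abstract (not the authors' method or code), the Python sketch below assigns a per-ray sample budget for a NeRF-style renderer based on retinal eccentricity from a tracked gaze point: rays near the fovea receive dense sampling, while peripheral rays are sampled sparsely under an assumed exponential acuity falloff. All constants, display parameters, and function names are placeholders; in a full system such a budget would drive foveal/peripheral rendering and blending, whereas here it only reports the average compute saving relative to uniform sampling.

import numpy as np

# Illustrative sketch only: the constants, names, and falloff model below are
# hypothetical and are not taken from the FoV-NeRF paper.

def eccentricity_deg(xs, ys, gaze_x, gaze_y, pixels_per_degree):
    """Angular distance (in visual degrees) of each pixel from the gaze point."""
    return np.hypot(xs - gaze_x, ys - gaze_y) / pixels_per_degree

def sample_budget(ecc_deg, max_samples=192, min_samples=16, falloff=0.08):
    """Spend fewer ray samples as eccentricity grows (exponential acuity falloff)."""
    budget = max_samples * np.exp(-falloff * ecc_deg)
    return np.clip(budget, min_samples, max_samples).astype(int)

if __name__ == "__main__":
    h, w, ppd = 1440, 1600, 20.0            # hypothetical per-eye HMD panel
    ys, xs = np.mgrid[0:h, 0:w]             # pixel coordinate grids
    gaze_x, gaze_y = w * 0.5, h * 0.5       # assume gaze at the panel center
    ecc = eccentricity_deg(xs, ys, gaze_x, gaze_y, ppd)
    budget = sample_budget(ecc)
    saving = 1.0 - budget.mean() / 192      # vs. uniform max_samples everywhere
    print(f"mean samples per ray: {budget.mean():.1f} "
          f"(~{saving:.0%} fewer than uniform sampling)")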
Pages: 3854-3864
Page count: 11