CMC: Few-shot Novel View Synthesis via Cross-view Multiplane Consistency

Cited by: 1
Authors
Zhu, Hanxin [1 ]
Chen, Zhibo [1 ]
Affiliation
[1] University of Science and Technology of China, Hefei, Anhui, China
Source
2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR 2024), 2024
Keywords
Neural Radiance Fields; Few-shot view synthesis; Multiplane Images; Cross-view consistency
DOI
10.1109/VR58804.2024.00115
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural Radiance Fields (NeRF) have shown impressive results in novel view synthesis, particularly for Virtual Reality (VR) and Augmented Reality (AR), thanks to their ability to represent scenes continuously. However, when only a few input views are available, NeRF tends to overfit the given views, causing the estimated depths of pixels to collapse to nearly the same value. Unlike previous methods that regularize training by introducing complex priors or additional supervision, we propose a simple yet effective method that explicitly builds depth-aware consistency across input views to tackle this challenge. Our key insight is that by forcing the same spatial points to be sampled repeatedly from different input views, we strengthen the interactions between views and thereby alleviate the overfitting problem. To achieve this, we build the neural networks on layered representations (i.e., multiplane images), so that each sampled point can be resampled on multiple discrete planes. Furthermore, to regularize the unseen target views, we constrain the colors and depths rendered from different input views to be the same. Although simple, extensive experiments demonstrate that our method achieves better synthesis quality than state-of-the-art methods.
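As a reading aid, the following is a minimal sketch, not the authors' code, of the cross-view regularization described in the abstract: colors and depths rendered for the same unseen target rays from two different input views' multiplane representations are constrained to agree. The function name cross_view_consistency_loss, the depth weight lambda_depth, and the toy inputs are illustrative assumptions.

import torch

def cross_view_consistency_loss(rgb_a, rgb_b, depth_a, depth_b, lambda_depth=0.1):
    # rgb_a, rgb_b:     (N, 3) colors for the same target rays, rendered from
    #                   input view A's and input view B's layered representation
    # depth_a, depth_b: (N,) corresponding rendered depths
    # lambda_depth:     hypothetical weight balancing the color and depth terms
    color_term = torch.mean((rgb_a - rgb_b) ** 2)
    depth_term = torch.mean((depth_a - depth_b) ** 2)
    return color_term + lambda_depth * depth_term

if __name__ == "__main__":
    # Toy usage: random tensors stand in for renderings from two input views.
    n_rays = 1024
    rgb_a, rgb_b = torch.rand(n_rays, 3), torch.rand(n_rays, 3)
    depth_a, depth_b = torch.rand(n_rays), torch.rand(n_rays)
    print(cross_view_consistency_loss(rgb_a, rgb_b, depth_a, depth_b))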
Pages: 960-968
Page count: 9