NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning

Cited by: 0
Authors
Chen, Linsheng [1 ]
Wang, Guangrun [2 ]
Yuan, Liuchun [1 ]
Wang, Keze [1 ]
Deng, Ken [1 ]
Torr, Philip H. S. [2 ]
Affiliations
[1] Sun Yat Sen Univ, Guangzhou, Peoples R China
[2] Univ Oxford, Oxford, England
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 2 | 2024
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China
DOI
N/A
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Neural Radiance Fields (NeRF) have garnered remarkable success in novel view synthesis. Nonetheless, generating high-quality images for novel views remains a critical challenge. While existing efforts have made commendable progress, capturing intricate details, enhancing textures, and achieving superior Peak Signal-to-Noise Ratio (PSNR) metrics warrant further focused attention and advancement. In this work, we propose NeRF-VPT, an innovative method for novel view synthesis that addresses these challenges. NeRF-VPT employs a cascading view prompt tuning paradigm, wherein RGB information gained from preceding rendering outcomes serves as instructive visual prompts for subsequent rendering stages, with the aspiration that the prior knowledge embedded in the prompts can facilitate the gradual enhancement of rendered image quality. NeRF-VPT only requires sampling RGB data from previous-stage renderings as priors at each training stage, without relying on extra guidance or complex techniques. Thus, NeRF-VPT is plug-and-play and can be readily integrated into existing methods. Through comparative analyses of NeRF-VPT against several NeRF-based approaches on demanding real-scene benchmarks, such as Realistic Synthetic 360, Real Forward-Facing, the Replica dataset, and a user-captured dataset, we substantiate that NeRF-VPT significantly elevates baseline performance and generates higher-quality novel view images than all the compared state-of-the-art methods. Furthermore, the cascading learning of NeRF-VPT introduces adaptability to scenarios with sparse inputs, resulting in a significant enhancement of accuracy for sparse-view novel view synthesis. The source code and dataset are available at https://github.com/Freedomcls/NeRF-VPT.
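The cascading scheme the abstract describes (each stage's rendered RGB becomes the visual prompt for the next stage) can be sketched as a toy loop. This is a hypothetical illustration only, not the authors' implementation: the placeholder renderer, the blending weight, and the function names are all assumptions; in the actual method each stage is a trained NeRF conditioned on the previous stage's rendering.

```python
import numpy as np

def render_stage(coords, prompt_rgb, weight=0.5):
    """Toy stand-in for one NeRF-VPT rendering stage (hypothetical).

    Produces a fresh coordinate-based RGB estimate and, when a prompt
    from the previous stage is available, blends it in. In the real
    method this would be a NeRF rendering pass conditioned on the
    prompt, not a fixed blend.
    """
    fresh = np.clip(np.sin(coords), 0.0, 1.0)  # placeholder "rendered" RGB
    if prompt_rgb is None:                      # first stage has no prompt
        return fresh
    return weight * prompt_rgb + (1.0 - weight) * fresh

def cascade(coords, num_stages=3):
    """Cascading view prompt tuning: each stage's output RGB is fed
    back as the visual prompt for the subsequent stage."""
    prompt = None
    for _ in range(num_stages):
        prompt = render_stage(coords, prompt)
    return prompt

# Example: a tiny 4x3 grid of "pixel" values refined over 3 stages.
rgb = cascade(np.linspace(0.0, 1.0, 12).reshape(4, 3))
```

The key design point mirrored here is that later stages consume only RGB sampled from earlier renderings, so no extra supervision or auxiliary networks are required, which is why the method is described as plug-and-play.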
Pages: 1156 - 1164
Page count: 9