ViCA-NeRF: View-Consistency-Aware 3D Editing of Neural Radiance Fields

Cited by: 0
Authors
Dong, Jiahua [1]
Wang, Yu-Xiong [1]
Affiliations
[1] Univ Illinois, Champaign, IL 61820 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce ViCA-NeRF, the first view-consistency-aware method for 3D editing with text instructions. In addition to the implicit neural radiance field (NeRF) modeling, our key insight is to exploit two sources of regularization that explicitly propagate the editing information across different views, thus ensuring multi-view consistency. For geometric regularization, we leverage the depth information derived from NeRF to establish image correspondences between different views. For learned regularization, we align the latent codes in the 2D diffusion model between edited and unedited images, enabling us to edit key views and propagate the update throughout the entire scene. Incorporating these two strategies, our ViCA-NeRF operates in two stages. In the initial stage, we blend edits from different views to create a preliminary 3D edit. This is followed by a second stage of NeRF training, dedicated to further refining the scene's appearance. Experimental results demonstrate that ViCA-NeRF provides more flexible, efficient (3 times faster) editing with higher levels of consistency and details, compared with the state of the art. Our code is available at: https://github.com/Dongjiahua/VICA-NeRF.
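The geometric regularization described in the abstract hinges on depth-based correspondence: a pixel in one view is back-projected using its NeRF-rendered depth and re-projected into another view, linking the two images. Below is a minimal sketch of this standard warping step (the function name, shared-intrinsics assumption, and camera-to-world pose convention are ours, not taken from the paper's code):

```python
import numpy as np

def reproject(uv, depth, K, c2w_src, c2w_dst):
    """Warp a pixel from a source view into a destination view using
    a rendered depth value (illustrative sketch, not the authors'
    implementation). Assumes both views share intrinsics K and that
    poses are 4x4 camera-to-world matrices."""
    u, v = uv
    # Back-project the pixel into the source camera frame at the given depth.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * depth
    # Lift to world coordinates via the source camera-to-world pose.
    p_world = c2w_src[:3, :3] @ p_cam + c2w_src[:3, 3]
    # Transform into the destination camera frame (inverse of its pose).
    R, t = c2w_dst[:3, :3], c2w_dst[:3, 3]
    p_dst = R.T @ (p_world - t)
    # Project with the shared intrinsics; return pixel coords and new depth.
    uvw = K @ p_dst
    return uvw[:2] / uvw[2], p_dst[2]
```

As a sanity check, warping between two identical camera poses must return the original pixel and depth unchanged; in practice, the warped coordinates establish which destination-view pixel should receive the edit propagated from the source view.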
Pages: 12
Related papers (50 total)
  • [21] GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields
    Ruzzi, Alessandro
    Shi, Xiangwei
    Wang, Xi
    Li, Gengyan
    De Mello, Shalini
    Chang, Hyung Jin
    Zhang, Xucong
    Hilliges, Otmar
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 9676 - 9685
  • [22] NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning
    Chen, Linsheng
    Wang, Guangrun
    Yuan, Liuchun
    Wang, Keze
    Deng, Ken
    Torr, Philip H. S.
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 2, 2024, : 1156 - 1164
  • [23] DEFORMTOON3D: Deformable Neural Radiance Fields for 3D Toonification
    Zhang, Junzhe
    Lan, Yushi
    Yang, Shuai
    Hong, Fangzhou
    Wang, Quan
    Yeo, Chai Kiat
    Liu, Ziwei
    Loy, Chen Change
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 9110 - 9120
  • [24] Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
    Haque, Ayaan
    Tancik, Matthew
    Efros, Alexei A.
    Holynski, Aleksander
    Kanazawa, Angjoo
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 19683 - 19693
  • [25] OV-NeRF: Open-Vocabulary Neural Radiance Fields With Vision and Language Foundation Models for 3D Semantic Understanding
    Liao, Guibiao
    Zhou, Kaichen
    Bao, Zhenyu
    Liu, Kanglin
    Li, Qing
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 12923 - 12936
  • [26] KT-NeRF: multi-view anti-motion blur neural radiance fields
    Wang, Yining
    Zhang, Jinyi
    Jiang, Yuxi
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (03) : 33006
  • [27] STs-NeRF: Novel View Synthesis of Space Targets Based on Improved Neural Radiance Fields
    Ma, Kaidi
    Liu, Peixun
    Sun, Haijiang
    Teng, Jiawei
    REMOTE SENSING, 2024, 16 (13)
  • [28] PW-NeRF: Progressive wavelet-mask guided neural radiance fields view synthesis
    Han, Xuefei
    Liu, Zheng
    Nan, Hai
    Zhao, Kai
    Zhao, Dongjie
    Jin, Xiaodan
    IMAGE AND VISION COMPUTING, 2024, 147
  • [29] RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes
    Gong, Bingchen
    Wang, Yuehao
    Han, Xiaoguang
    Dou, Qi
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8004 - 8015
  • [30] OptiViewNeRF: Optimizing 3D reconstruction via batch view selection and scene uncertainty in Neural Radiance Fields
    Li, You
    Li, Rui
    Li, Ziwei
    Guo, Renzhong
    Tang, Shengjun
    INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2025, 136