Vox-E: Text-guided Voxel Editing of 3D Objects

Cited by: 21
Authors
Sella, Etai [1 ]
Fiebelman, Gal [1 ]
Hedman, Peter [2 ]
Averbuch-Elor, Hadar [1 ]
Affiliations
[1] Tel Aviv Univ, Tel Aviv, Israel
[2] Google Res, New York, NY 10011 USA
Source
2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023
DOI
10.1109/ICCV51070.2023.00046
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large scale text-guided diffusion models have garnered significant attention due to their ability to synthesize diverse images that convey complex visual concepts. This generative power has more recently been leveraged to perform text-to-3D synthesis. In this work, we present a technique that harnesses the power of latent diffusion models for editing existing 3D objects. Our method takes oriented 2D images of a 3D object as input and learns a grid-based volumetric representation of it. To guide the volumetric representation to conform to a target text prompt, we follow unconditional text-to-3D methods and optimize a Score Distillation Sampling (SDS) loss. However, we observe that combining this diffusion-guided loss with an image-based regularization loss that encourages the representation not to deviate too strongly from the input object is challenging, as it requires achieving two conflicting goals while viewing only structure-and-appearance coupled 2D projections. Thus, we introduce a novel volumetric regularization loss that operates directly in 3D space, utilizing the explicit nature of our 3D representation to enforce correlation between the global structure of the original and edited object. Furthermore, we present a technique that optimizes cross-attention volumetric grids to refine the spatial extent of the edits. Extensive experiments and comparisons demonstrate the effectiveness of our approach in creating a myriad of edits that cannot be achieved by prior works.
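The volumetric regularization described in the abstract operates directly on the explicit 3D grids rather than on 2D renderings. As a minimal sketch of the idea (not the paper's actual implementation: the function name, the Pearson-correlation form of the penalty, and the use of raw density grids are all assumptions for illustration), one can penalize decorrelation between the original and edited voxel density grids:

```python
import numpy as np

def volumetric_reg_loss(density_orig, density_edit, eps=1e-8):
    """Hypothetical 3D regularization sketch: penalize decorrelation
    between the original and edited voxel density grids (shape D x H x W).
    Returns 0 when the grids are perfectly correlated, up to 2 when
    they are perfectly anti-correlated."""
    a = density_orig.ravel().astype(np.float64)
    b = density_edit.ravel().astype(np.float64)
    # Standardize each grid so the penalty is invariant to density scale.
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    corr = float(np.mean(a * b))  # Pearson correlation in [-1, 1]
    return 1.0 - corr

# Identical grids incur (near-)zero penalty.
grid = np.random.rand(8, 8, 8)
print(volumetric_reg_loss(grid, grid))  # ≈ 0.0
```

In a full pipeline such a term would be weighted against the SDS loss, trading edit strength against structural fidelity to the input object; the exact loss form and weighting used by the paper are not specified in this abstract.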
Pages: 430-440
Page count: 11