SDFusion: Multimodal 3D Shape Completion, Reconstruction, and Generation

Citations: 60
Authors
Cheng, Yen-Chi [1 ]
Lee, Hsin-Ying [2 ]
Tulyakov, Sergey [2 ]
Schwing, Alexander [1 ]
Gui, Liangyan [1 ]
Affiliations
[1] Univ Illinois, Champaign, IL 61820 USA
[2] Snap Res, Santa Monica, CA USA
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR | 2023
DOI
10.1109/CVPR52729.2023.00433
Chinese Library Classification (CLC): TP18 (Artificial Intelligence Theory)
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
In this work, we present a novel framework built to simplify 3D asset generation for amateur users. To enable interactive generation, our method supports a variety of input modalities that can be easily provided by a human, including images, text, partially observed shapes, and combinations of these, and further allows users to adjust the strength of each input. At the core of our approach is an encoder-decoder that compresses 3D shapes into a compact latent representation, upon which a diffusion model is learned. To enable a variety of multi-modal inputs, we employ task-specific encoders with dropout followed by a cross-attention mechanism. Due to its flexibility, our model naturally supports a variety of tasks, outperforming prior works on shape completion, image-based 3D reconstruction, and text-to-3D. Most interestingly, our model can combine all these tasks into one Swiss-army-knife tool, enabling the user to perform shape generation from incomplete shapes, images, and textual descriptions at the same time, providing relative weights for each input and facilitating interactivity. Despite our approach being shape-only, we further show an efficient method to texture the generated shape using large-scale text-to-image models.
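The conditioning scheme sketched in the abstract — task-specific encoders whose outputs are randomly dropped during training and then fused into the diffusion backbone via cross-attention — can be illustrated roughly as follows. This is a minimal numpy sketch under assumed names and dimensions (`encode_conditions`, `null_tokens`, the dropout probability, single-head attention), not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode_conditions(conds, null_tokens, drop_p=0.3, train=True):
    """Fuse per-modality condition tokens into one sequence.

    During training each modality is independently replaced by a learned
    'null' token with probability drop_p (illustrative value); this is
    what later lets the sampler weight each modality on its own.
    """
    out = []
    for name, tokens in conds.items():
        if train and rng.random() < drop_p:
            tokens = null_tokens[name]  # modality dropped for this step
        out.append(tokens)
    return np.concatenate(out, axis=0)  # (sum of token counts, d)

def cross_attention(latent, cond, Wq, Wk, Wv):
    """Latent shape tokens attend to the fused condition tokens (one head)."""
    q = latent @ Wq                       # (N_latent, d)
    k = cond @ Wk                         # (N_cond, d)
    v = cond @ Wv                         # (N_cond, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # rows sum to 1
    return attn @ v                       # conditioned latent update
```

Because a dropped modality is replaced by a null token rather than removed, the fused sequence keeps a fixed layout, and at sampling time one can run the model with and without each modality to mix their guidance strengths, in the spirit of classifier-free guidance.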
Pages: 4456-4465
Page count: 10