Unsupervised 3D Shape Reconstruction by Part Retrieval and Assembly

Cited: 3
Authors
Xu, Xianghao [1]
Guerrero, Paul [2,3,4]
Fisher, Matthew [2,3,4]
Chaudhuri, Siddhartha [2,3,4]
Ritchie, Daniel [1 ]
Affiliations
[1] Brown Univ, Providence, RI 02912 USA
[2] Adobe Res, London, England
[3] Adobe Res, San Francisco, CA USA
[4] Adobe Res, Bangalore, Karnataka, India
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.00827
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Representing a 3D shape with a set of primitives can aid perception of structure, improve robotic object manipulation, and enable editing, stylization, and compression of 3D shapes. Existing methods either use simple parametric primitives or learn a generative shape space of parts. Both have limitations: parametric primitives lead to coarse approximations, while learned parts offer too little control over the decomposition. We instead propose to decompose shapes using a library of 3D parts provided by the user, giving full control over the choice of parts. The library can contain parts with high-quality geometry that are suitable for a given category, resulting in meaningful decompositions with clean geometry. The type of decomposition can also be controlled through the choice of parts in the library. Our method works via an unsupervised approach that iteratively retrieves parts from the library and refines their placements. We show that this approach gives higher reconstruction accuracy and more desirable decompositions than existing approaches. Additionally, we show how the decomposition can be controlled through the part library by using different part libraries to reconstruct the same shapes.
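The "iteratively retrieves parts from the library and refines their placements" idea from the abstract can be illustrated with a toy greedy loop. Everything below is an illustrative sketch, not the paper's actual pipeline: the Chamfer objective, translation-only placements, and the coordinate-descent refinement are all assumptions made for a minimal, self-contained example.

```python
# Toy sketch of part retrieval and assembly (illustrative assumptions only):
# greedily pick the library part whose translated copy most reduces a
# Chamfer-style reconstruction error, then locally refine its placement.

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def chamfer(a, b):
    # Symmetric Chamfer distance between two point sets (lists of 3-tuples).
    fwd = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    bwd = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return fwd + bwd

def centroid(pts):
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(3))

def translate(pts, t):
    return [tuple(c + d for c, d in zip(p, t)) for p in pts]

def reconstruct(target, library, n_parts, refine_iters=20, step=0.25):
    """Greedy retrieve-and-refine loop over a user-provided part library."""
    placed = []  # point sets of parts placed so far
    for _ in range(n_parts):
        best = None
        union = [p for s in placed for p in s]
        for part in library:
            # Retrieve: initialize the part at the target centroid.
            t0 = tuple(tc - pc for tc, pc in zip(centroid(target), centroid(part)))
            cand = translate(part, t0)
            # Refine: greedy coordinate descent on the translation.
            for _ in range(refine_iters):
                improved = False
                for axis in range(3):
                    for sign in (1.0, -1.0):
                        dt = [0.0, 0.0, 0.0]
                        dt[axis] = sign * step
                        moved = translate(cand, tuple(dt))
                        if chamfer(target, union + moved) < chamfer(target, union + cand):
                            cand, improved = moved, True
                if not improved:
                    break
            err = chamfer(target, union + cand)
            if best is None or err < best[0]:
                best = (err, cand)
        placed.append(best[1])
    return placed
```

With a single two-point "part" and a target that is just that part translated, the centroid initialization already aligns them and the loop returns an exact placement; real shapes would of course need richer transforms and a smarter search than this sketch.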
Pages: 8559-8567
Page count: 9
Related Papers
50 records in total
  • [1] Universal unsupervised cross-domain 3D shape retrieval
    Zhou, Heyu; Wang, Fan; Liu, Qipei; Li, Jiayu; Liu, Wen; Li, Xuanya; Liu, An-An
    Multimedia Systems, 2024, 30 (01)
  • [2] PANORAMA: A 3D Shape Descriptor Based on Panoramic Views for Unsupervised 3D Object Retrieval
    Papadakis, Panagiotis; Pratikakis, Ioannis; Theoharis, Theoharis; Perantonis, Stavros
    International Journal of Computer Vision, 2010, 89 (2-3): 177-192
  • [3] Multi-graph Convolutional Network for Unsupervised 3D Shape Retrieval
    Nie, Weizhi; Zhao, Yue; Liu, An-An; Gao, Zan; Su, Yuting
    MM '20: Proceedings of the 28th ACM International Conference on Multimedia, 2020: 3395-3403
  • [4] Unsupervised 3D Reconstruction Networks
    Cha, Geonho; Lee, Minsik; Oh, Songhwai
    2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019: 3848-3857
  • [5] Unsupervised Shape Enhancement and Factorization Machine Network for 3D Face Reconstruction
    Yang, Leyang; Zhang, Boyang; Gong, Jianchang; Wang, Xueming; Li, Xiangzheng; Ma, Kehua
    Artificial Neural Networks and Machine Learning, ICANN 2023, Pt III, 2023, 14256: 209-220
  • [6] Unsupervised 3D Articulated Object Correspondences with Part Approximation and Shape Refinement
    Diao, Junqi; Jiang, Haiyong; Yan, Feilong; Zhang, Yong; Luan, Jinhui; Xiao, Jun
    Computer-Aided Design and Computer Graphics, CAD/Graphics 2023, 2024, 14250: 1-15
  • [7] Differential 3D shape retrieval
    Di Martino, J. Matias; Fernandez, Alicia; Ayubi, Gaston A.; Ferrari, Jose A.
    Optics and Lasers in Engineering, 2014, 58: 114-118
  • [8] 3D Gaussian Descriptor for 3D Shape Retrieval
    Chaouch, Mohamed; Verroust-Blondet, Anne
    ICME: 2009 IEEE International Conference on Multimedia and Expo, 2009: 834-837