Glissando-Net: Deep Single View Category Level Pose Estimation and 3D Reconstruction

Cited by: 0
Authors
Sun, Bo [1 ]
Kang, Hao [2 ]
Guan, Li [3 ]
Li, Haoxiang [4 ]
Mordohai, Philippos [5 ]
Hua, Gang [6 ]
Affiliations
[1] Adobe Inc., San Jose, CA 95110 USA
[2] ByteDance Inc., Bellevue, WA USA
[3] Meta Reality Labs, Menlo Park, CA USA
[4] Pixocial Technology, Bellevue, WA USA
[5] Stevens Institute of Technology, Hoboken, NJ USA
[6] Dolby Laboratories, Bellevue, WA USA
Funding
U.S. National Science Foundation;
Keywords
Shape; Three-dimensional displays; Point cloud compression; Solid modeling; Training; Decoding; Pose estimation; Image reconstruction; Predictive models; Transforms; 3D shape reconstruction; 3D pose estimation; single view 3D shape estimation; variational autoencoder;
DOI
10.1109/TPAMI.2024.3519674
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a deep learning model, dubbed Glissando-Net, to simultaneously estimate the pose and reconstruct the 3D shape of objects at the category level from a single RGB image. Previous works predominantly focused on either estimating poses (often at the instance level) or reconstructing shapes, but not both. Glissando-Net is composed of two auto-encoders that are jointly trained, one for RGB images and the other for point clouds. We embrace two key design choices in Glissando-Net to achieve a more accurate prediction of the 3D shape and pose of the object given a single RGB image as input. First, we augment the feature maps of the point cloud encoder and decoder with transformed feature maps from the image decoder, enabling effective 2D-3D interaction in both training and prediction. Second, we predict both the 3D shape and pose of the object in the decoder stage. This way, we better utilize the information in the 3D point clouds, which is available only during training, to train the network for more accurate prediction. We jointly train the two encoder-decoders for RGB and point cloud data so the network learns how to pass latent features to the point cloud decoder during inference; at test time, the point cloud encoder is discarded. The design of Glissando-Net is inspired by codeSLAM. Unlike codeSLAM, which targets 3D reconstruction of scenes, we focus on pose estimation and shape reconstruction of objects, and directly predict the object pose and a pose-invariant 3D reconstruction without the need for a code optimization step. Extensive experiments, involving both ablation studies and comparisons with competing methods, demonstrate the efficacy of the proposed method, which compares favorably with the state of the art.
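
As a companion to the abstract above, here is a minimal PyTorch sketch of the two-branch design it describes: an RGB auto-encoder and a point cloud auto-encoder whose shared decoder predicts both a canonical point cloud and a pose, with the point cloud encoder used only during training. This is not the authors' implementation; the class names (RGBEncoder, PointCloudEncoder, FusedDecoder), layer sizes, the concatenation-based feature fusion, and the rotation-plus-translation pose parameterization are all illustrative assumptions.

# Minimal sketch of the two-branch architecture described in the abstract.
# NOT the authors' implementation: all module names, layer sizes, and the
# fusion scheme (global feature concatenation) are illustrative assumptions.
import torch
import torch.nn as nn


class RGBEncoder(nn.Module):
    """Encode an RGB image into a global latent code (placeholder CNN)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, img):                      # img: (B, 3, H, W)
        return self.fc(self.conv(img).flatten(1))


class PointCloudEncoder(nn.Module):
    """PointNet-style encoder; per the paper, used only during training."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, pts):                      # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values   # global max pooling over points


class FusedDecoder(nn.Module):
    """Decode fused 2D+3D latents into a canonical point cloud and a pose."""
    def __init__(self, latent_dim=256, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.shape_head = nn.Sequential(
            nn.Linear(2 * latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )
        # Pose output assumed here as a 6D rotation representation + translation.
        self.pose_head = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 9),
        )

    def forward(self, img_feat, pc_feat):
        fused = torch.cat([img_feat, pc_feat], dim=1)
        shape = self.shape_head(fused).view(-1, self.num_points, 3)
        pose = self.pose_head(fused)
        return shape, pose


if __name__ == "__main__":
    rgb_enc, pc_enc, dec = RGBEncoder(), PointCloudEncoder(), FusedDecoder()
    img = torch.randn(2, 3, 128, 128)
    pts = torch.randn(2, 2048, 3)
    # Training: both encoders feed the shared decoder.
    shape, pose = dec(rgb_enc(img), pc_enc(pts))
    # Testing: the point cloud encoder is discarded; a learned mapping from the
    # image latent would stand in for pc_enc(pts) (omitted in this sketch).
    print(shape.shape, pose.shape)   # torch.Size([2, 1024, 3]) torch.Size([2, 9])

The toy main block mimics the training/testing asymmetry stated in the abstract: during training both encoders feed the decoder, while at inference only the image branch remains and the 3D latent must be supplied from it.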
Pages: 2298 - 2312
Number of pages: 15