Glissando-Net: Deep Single View Category Level Pose Estimation and 3D Reconstruction

Cited by: 0
Authors
Sun, Bo [1]
Kang, Hao [2]
Guan, Li [3]
Li, Haoxiang [4]
Mordohai, Philippos [5]
Hua, Gang [6]
Affiliations
[1] Adobe Inc., San Jose, CA 95110, USA
[2] ByteDance Inc., Bellevue, WA, USA
[3] Meta Reality Labs, Menlo Park, CA, USA
[4] Pixocial Technology, Bellevue, WA, USA
[5] Stevens Institute of Technology, Hoboken, NJ, USA
[6] Dolby Laboratories, Bellevue, WA, USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Shape; Three-dimensional displays; Point cloud compression; Solid modeling; Training; Decoding; Pose estimation; Image reconstruction; Predictive models; Transforms; 3D shape reconstruction; 3D pose estimation; single view 3D shape estimation; variational autoencoder;
DOI
10.1109/TPAMI.2024.3519674
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
We present a deep learning model, dubbed Glissando-Net, to simultaneously estimate the pose and reconstruct the 3D shape of objects at the category level from a single RGB image. Previous works predominantly focused on either estimating poses (often at the instance level) or reconstructing shapes, but not both. Glissando-Net is composed of two auto-encoders that are jointly trained, one for RGB images and the other for point clouds. We embrace two key design choices in Glissando-Net to achieve a more accurate prediction of the 3D shape and pose of the object given a single RGB image as input. First, we augment the feature maps of the point cloud encoder and decoder with transformed feature maps from the image decoder, enabling effective 2D-3D interaction in both training and prediction. Second, we predict both the 3D shape and the pose of the object in the decoder stage. This way, we better exploit the information in the 3D point clouds, which are available only during training, to train the network for more accurate prediction. We jointly train the two encoder-decoders for RGB and point cloud data so that the network learns how to pass latent features to the point cloud decoder during inference. At test time, the point cloud encoder is discarded. The design of Glissando-Net is inspired by CodeSLAM. Unlike CodeSLAM, which targets 3D reconstruction of scenes, we focus on pose estimation and shape reconstruction of objects, and directly predict the object pose and a pose-invariant 3D reconstruction without the need for a code-optimization step. Extensive experiments, involving both ablation studies and comparisons with competing methods, demonstrate the efficacy of our proposed method, which compares favorably with the state of the art.
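For intuition, the following is a minimal PyTorch-style sketch of the two-branch design described above: an image auto-encoder whose decoded features are fused into a point-cloud decoder that outputs a pose-invariant shape and a pose. Every module name, layer size, the global-vector fusion (the paper fuses feature maps), and the 7-parameter pose output are illustrative assumptions, not the authors' implementation; the jointly trained point-cloud encoder, which is discarded at test time, is omitted.

# Minimal sketch only; layer sizes, fusion scheme, and pose parameterization
# are assumptions for illustration, not the Glissando-Net implementation.
import torch
import torch.nn as nn


class ImageAutoEncoder(nn.Module):
    """RGB branch: encodes the input image and decodes a feature vector
    that is handed to the point-cloud branch (simplified to a global vector)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )

    def forward(self, img):
        z = self.encoder(img)          # latent code
        img_feat = self.decoder(z)     # 2D feature passed to the 3D branch
        return z, img_feat


class PointCloudDecoder(nn.Module):
    """3D branch decoder: fuses image features with the latent code and
    predicts a canonical point cloud plus a pose (quaternion + translation)."""
    def __init__(self, latent_dim=256, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.fuse = nn.Linear(latent_dim + 128, 256)
        self.shape_head = nn.Linear(256, num_points * 3)  # pose-invariant shape
        self.pose_head = nn.Linear(256, 7)                # 4D quaternion + 3D translation

    def forward(self, z, img_feat):
        h = torch.relu(self.fuse(torch.cat([z, img_feat], dim=-1)))
        shape = self.shape_head(h).view(-1, self.num_points, 3)
        pose = self.pose_head(h)
        return shape, pose


# At test time only the image encoder-decoder and the point-cloud decoder run;
# the point-cloud encoder used during joint training is not needed (and omitted here).
img = torch.randn(2, 3, 128, 128)
img_ae, pc_dec = ImageAutoEncoder(), PointCloudDecoder()
z, img_feat = img_ae(img)
shape, pose = pc_dec(z, img_feat)
print(shape.shape, pose.shape)   # torch.Size([2, 1024, 3]) torch.Size([2, 7])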
Pages: 2298-2312
Page count: 15
References
78 in total
  • [1] Ahmadyan, A., Zhang, L., Ablavatski, A., Wei, J., Grundmann, M., "Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations," CVPR 2021, pp. 7818-7827.
  • [2] Avetisyan, A., arXiv:1906.04201, 2019.
  • [3] Avetisyan, A., Dahnert, M., Dai, A., Savva, M., Chang, A. X., Niessner, M., "Scan2CAD: Learning CAD Model Alignment in RGB-D Scans," CVPR 2019, pp. 2609-2618.
  • [4] Bloesch, M., Laidlow, T., Clark, R., Leutenegger, S., Davison, A. J., "Learning Meshes for Dense Visual SLAM," ICCV 2019, pp. 5854-5863.
  • [5] Bloesch, M., Czarnowski, J., Clark, R., Leutenegger, S., Davison, A. J., "CodeSLAM - Learning a Compact, Optimisable Representation for Dense Visual SLAM," CVPR 2018, pp. 2560-2568.
  • [6] Brachmann, E., Michel, F., Krull, A., Yang, M. Y., Gumhold, S., Rother, C., "Uncertainty-Driven 6D Pose Estimation of Objects and Scenes from a Single RGB Image," CVPR 2016, pp. 3364-3372.
  • [7] Chen, D. S., CVPR 2020, p. 11970, DOI 10.1109/CVPR42600.2020.01199.
  • [8] Chen Wang, ICRA 2020, p. 10059, DOI 10.1109/ICRA40945.2020.9196679.
  • [9] Chen, W., Jia, X., Chang, H. J., Duan, J., Leonardis, A., "G2L-Net: Global to Local Network for Real-time 6D Pose Estimation with Embedding Vector Features," CVPR 2020, pp. 4232-4241.
  • [10] Chen, X., Kundu, K., Zhang, Z., Ma, H., Fidler, S., Urtasun, R., "Monocular 3D Object Detection for Autonomous Driving," CVPR 2016, pp. 2147-2156.