HS-Pose: Hybrid Scope Feature Extraction for Category-level Object Pose Estimation

Cited by: 22
Authors
Zheng, Linfang [1,4]
Wang, Chen [1,2]
Sun, Yinghan [1]
Dasgupta, Esha [4]
Chen, Hua [1]
Leonardis, Ales [4]
Zhang, Wei [1,3]
Chang, Hyung Jin [4]
Affiliations
[1] Southern Univ Sci & Technol, Dept Mech & Energy Engn, Shenzhen, Peoples R China
[2] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
[4] Univ Birmingham, Sch Comp Sci, Birmingham, England
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/CVPR52729.2023.01646
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we focus on the problem of category-level object pose estimation, which is challenging due to the large intra-category shape variation. 3D graph convolution (3D-GC) based methods have been widely used to extract local geometric features, but they have limitations for complex-shaped objects and are sensitive to noise. Moreover, the scale and translation invariant properties of 3D-GC restrict the perception of an object's size and translation information. In this paper, we propose a simple network structure, the HS-layer, which extends 3D-GC to extract hybrid scope latent features from point cloud data for category-level object pose estimation tasks. The proposed HS-layer: 1) is able to perceive local-global geometric structure and global information, 2) is robust to noise, and 3) can encode size and translation information. Our experiments show that simply replacing the 3D-GC layer with the proposed HS-layer in the baseline method (GPV-Pose) achieves a significant improvement, with performance increased by 14.5% on the 5°2cm metric and 10.3% on IoU75. Our method outperforms the state-of-the-art methods by a large margin (8.3% on 5°2cm, 6.9% on IoU75) on the REAL275 dataset and runs in real time (50 FPS).
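The sketch below (Python/PyTorch) is a minimal illustration of the idea the abstract describes, not the authors' implementation: a point-cloud layer that combines a scale/translation-invariant local branch (a 3D-GC-style neighborhood feature, approximated here with an MLP over relative k-NN offsets) with a global branch over absolute coordinates so that size and translation cues are not discarded. The class name, layer widths, and neighborhood size k are assumptions chosen for illustration only.

# Illustrative sketch only (not the paper's HS-layer): fuse a local,
# scale/translation-invariant branch with a global branch over absolute
# coordinates so size/translation information is preserved.
import torch
import torch.nn as nn


def knn_indices(points: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbors per point. points: (B, N, 3) -> (B, N, k)."""
    dist = torch.cdist(points, points)                      # (B, N, N) pairwise distances
    return dist.topk(k, dim=-1, largest=False).indices


class HybridScopeLayerSketch(nn.Module):
    """Toy 'hybrid scope' layer (hypothetical): local relative-offset features
    (3D-GC-style, invariant to object scale and translation) concatenated with a
    global max-pooled feature of the absolute coordinates (carries size/translation cues)."""

    def __init__(self, k: int = 16, out_dim: int = 128):
        super().__init__()
        self.k = k
        self.local_mlp = nn.Sequential(nn.Linear(3, out_dim), nn.ReLU(),
                                       nn.Linear(out_dim, out_dim))
        self.global_mlp = nn.Sequential(nn.Linear(3, out_dim), nn.ReLU(),
                                        nn.Linear(out_dim, out_dim))
        self.fuse = nn.Linear(2 * out_dim, out_dim)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) object point cloud
        B, N, _ = points.shape
        idx = knn_indices(points, self.k)                                   # (B, N, k)
        neighbors = torch.gather(
            points.unsqueeze(1).expand(B, N, N, 3), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, 3))                      # (B, N, k, 3)
        rel = neighbors - points.unsqueeze(2)                               # relative offsets
        local_feat = self.local_mlp(rel).max(dim=2).values                  # (B, N, out_dim)
        global_feat = self.global_mlp(points).max(dim=1, keepdim=True).values
        global_feat = global_feat.expand(-1, N, -1)                         # broadcast to all points
        return self.fuse(torch.cat([local_feat, global_feat], dim=-1))


if __name__ == "__main__":
    pc = torch.randn(2, 1024, 3)                  # two dummy point clouds
    feat = HybridScopeLayerSketch()(pc)
    print(feat.shape)                             # torch.Size([2, 1024, 128])

Concatenating a pooled global feature with each per-point local feature is one simple way to give every point access to both local geometry and object-level scope; the paper itself should be consulted for the actual HS-layer design and its noise-robustness mechanisms.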
Pages: 17163-17173
Page count: 11