HS-Pose: Hybrid Scope Feature Extraction for Category-level Object Pose Estimation

Cited by: 38
Authors
Zheng, Linfang [1 ,4 ]
Wang, Chen [1 ,2 ]
Sun, Yinghan [1 ]
Dasgupta, Esha [4 ]
Chen, Hua [1 ]
Leonardis, Ales [4 ]
Zhang, Wei [1 ,3 ]
Chang, Hyung Jin [4 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Mech & Energy Engn, Shenzhen, Peoples R China
[2] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
[4] Univ Birmingham, Sch Comp Sci, Birmingham, England
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2023
Funding
National Natural Science Foundation of China; Engineering and Physical Sciences Research Council (UK)
DOI
10.1109/CVPR52729.2023.01646
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we focus on the problem of category-level object pose estimation, which is challenging due to the large intra-category shape variation. 3D graph convolution (3D-GC) based methods have been widely used to extract local geometric features, but they have limitations for complex shaped objects and are sensitive to noise. Moreover, the scale and translation invariant properties of 3D-GC restrict the perception of an object's size and translation information. In this paper, we propose a simple network structure, the HS-layer, which extends 3D-GC to extract hybrid scope latent features from point cloud data for category-level object pose estimation tasks. The proposed HS-layer: 1) is able to perceive local-global geometric structure and global information, 2) is robust to noise, and 3) can encode size and translation information. Our experiments show that the simple replacement of the 3D-GC layer with the proposed HS-layer on the baseline method (GPV-Pose) achieves a significant improvement, with the performance increased by 14.5% on the 5°2cm metric and 10.3% on IoU75. Our method outperforms the state-of-the-art methods by a large margin (8.3% on 5°2cm, 6.9% on IoU75) on the REAL275 dataset and runs in real-time (50 FPS).
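The invariance limitation the abstract attributes to 3D-GC can be seen in a toy NumPy sketch (not the authors' implementation): a graph-convolution-style layer that aggregates unit direction vectors to k nearest neighbors produces features that are unchanged under global translation and uniform scaling of the point cloud, so size and translation information is lost. The function name and aggregation choice here are illustrative assumptions.

```python
import numpy as np

def knn_direction_features(points, k=3):
    """Toy 3D-GC-style local feature: for each point, average the
    unit direction vectors toward its k nearest neighbors.
    points: (N, 3) array; returns (N, 3) features."""
    # pairwise squared distances, excluding self
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]          # (N, k) neighbor indices
    rel = points[nbrs] - points[:, None, :]       # relative coordinates (N, k, 3)
    rel /= np.linalg.norm(rel, axis=-1, keepdims=True)  # normalize -> scale-invariant
    return rel.mean(axis=1)

pts = np.random.default_rng(0).normal(size=(16, 3))
f0 = knn_direction_features(pts)
# Translating or uniformly scaling the cloud leaves the features unchanged:
assert np.allclose(f0, knn_direction_features(pts + 5.0))
assert np.allclose(f0, knn_direction_features(pts * 2.0))
```

Because the features depend only on normalized relative coordinates, any information about absolute position or object size is discarded, which is the property the HS-layer is designed to complement.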
Pages: 17163-17173
Page count: 11