Spatial feature mapping for 6DoF object pose estimation

Cited by: 10
Authors
Mei, Jianhan [1 ]
Jiang, Xudong [1 ]
Ding, Henghui [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore, Singapore
Keywords
6D Pose estimation; Rotation symmetry; Spherical convolution; Graph convolutional network; RECOGNITION; SYMMETRY;
DOI
10.1016/j.patcog.2022.108835
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This work aims to estimate the six-degree-of-freedom (6DoF) object pose in background clutter. Considering strong occlusion and background noise, we propose to utilize the spatial structure to better tackle this challenging task. Observing that a 3D mesh can be naturally abstracted as a graph, we build the graph using 3D points as vertices and mesh connections as edges. We construct the corresponding mapping from 2D image features to 3D points to fill the graph and fuse the 2D and 3D features. Afterward, a Graph Convolutional Network (GCN) is applied to facilitate feature exchange among the object's points in 3D space. To address the rotation-symmetry ambiguity of objects, a spherical convolution is utilized, and the spherical features are combined with the convolutional features mapped to the graph. Predefined 3D keypoints are voted for, and the 6DoF pose is obtained via fitting optimization. Two inference scenarios, one with depth information and one without, are discussed. Tested on the YCB-Video and LINEMOD datasets, the experiments demonstrate the effectiveness of our proposed method.
(c) 2022 Elsevier Ltd. All rights reserved.
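The core idea in the abstract (a graph with the mesh's 3D points as vertices and mesh connections as edges, carrying mapped 2D image features that a GCN then exchanges along edges) can be illustrated with a minimal sketch. This is not the paper's implementation: the tiny mesh, feature dimensions, and the symmetrically normalized GCN propagation rule used here are illustrative assumptions.

```python
import numpy as np

# Hypothetical tiny mesh: 4 vertices connected in a cycle.
# Each vertex carries an 8-dim feature standing in for a 2D image
# feature mapped onto the corresponding 3D point.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))           # mapped 2D image features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # mesh connections as graph edges

# Adjacency with self-loops, symmetrically normalized.
A = np.eye(4)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))

# One GCN layer: propagate features along mesh edges, then project.
W = rng.normal(size=(8, 8)) * 0.1
out = np.maximum(A_hat @ feats @ W, 0.0)  # ReLU(A_hat X W)
print(out.shape)
```

After this propagation step, each vertex's feature mixes information from its mesh neighbors, which is the "feature exchange among the object's points" the abstract describes.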
Pages: 12
Related papers
50 records total
  • [31] 6DOF Needle Pose Estimation for Robot-Assisted Vitreoretinal Surgery
    Zhou, Mingchuan
    Hao, Xing
    Eslami, Abouzar
    Huang, Kai
    Cai, Caixia
    Lohmann, Chris P.
    Navab, Nassir
    Knoll, Alois
    Nasseri, M. Ali
    IEEE ACCESS, 2019, 7 : 63113 - 63122
  • [32] Real-time scalable 6DOF pose estimation for textureless objects
    Cao, Zhe
    Sheikh, Yaser
    Banerjee, Natasha Kholgade
    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2016, : 2441 - 2448
  • [33] Summarizing image/surface registration for 6DOF robot/camera pose estimation
    Batlle, Elisabet
    Matabosch, Carles
    Salvi, Joaquim
    PATTERN RECOGNITION AND IMAGE ANALYSIS, PT 2, PROCEEDINGS, 2007, 4478 : 105 - +
  • [34] ParametricNet: 6DoF Pose Estimation Network for Parametric Shapes in Stacked Scenarios
    Zeng, Long
    Lv, Wei Jie
    Zhang, Xin Yu
    Liu, Yong Jin
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 772 - 778
  • [35] PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation
    Peng, Sida
    Liu, Yuan
    Huang, Qixing
    Zhou, Xiaowei
    Bao, Hujun
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 4556 - 4565
  • [36] Keypoint Cascade Voting for Point Cloud Based 6DoF Pose Estimation
    Wu, Yangzheng
    Javaheri, Alireza
    Zand, Mohsen
    Greenspan, Michael
    2022 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV, 2022, : 176 - 186
  • [37] Optimizing RGB-D Fusion for Accurate 6DoF Pose Estimation
    Saadi, Lounes
    Besbes, Bassem
    Kramm, Sebastien
    Bensrhair, Abdelaziz
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02): : 2413 - 2420
  • [38] A Study on the Impact of Domain Randomization for Monocular Deep 6DoF Pose Estimation
    da Cunha, Kelvin B.
    Brito, Caio
    Valenca, Lucas
    Simoes, Francisco
    Teichrieb, Veronica
    2020 33RD SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI 2020), 2020, : 332 - 339
  • [39] Refining Weights for Enhanced Object Similarity in Multi-perspective 6DoF Pose Estimation and 3D Object Detection
    Kusumo, Budiarianto Suryo
    Thomas, Ulrike
    DEEP LEARNING THEORY AND APPLICATIONS, PT I, DELTA 2024, 2024, 2171 : 310 - 327
  • [40] Toward 6 DOF Object Pose Estimation with Minimum Dataset
    Suzui, Kota
    Yoshiyasu, Yusuke
    Gabas, Antonio
    Kanehiro, Fumio
    Yoshida, Eiichi
    2019 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2019, : 462 - 467