6D pose estimation of textureless shiny objects using random ferns for bin-picking

Cited by: 0
Authors
Rodrigues, Jose Jeronimo [1 ,3 ]
Kim, Jun-Sik [1 ]
Furukawa, Makoto [4 ]
Xavier, Joao [2 ,3 ]
Aguiar, Pedro [2 ,3 ]
Kanade, Takeo [1 ]
Affiliations
[1] Carnegie Mellon Univ, Inst Robot, Pittsburgh, PA 15213 USA
[2] Univ Tecn Lisboa, Inst Super Tecn, Lisbon, Portugal
[3] Univ Tecn Lisboa, Inst Syst & Robot, Lisbon, Portugal
[4] Honda Engn Co Ltd, Kyoto, Japan
Source
2012 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | 2012
Keywords
RECOGNITION;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
We address the problem of 6D pose estimation of a textureless and shiny object from single-view 2D images, for a bin-picking task. For a textureless object such as a mechanical part, conventional visual feature matching usually fails due to the absence of rich texture features. Hierarchical template matching assumes that a few templates can cover all object appearances. However, the appearance of a shiny object depends strongly on its pose and illumination. Furthermore, in a bin-picking task, we must cope with partial occlusions, shadows, and inter-reflections. In this paper, we propose a purely data-driven method to tackle the pose estimation problem. Motivated by photometric stereo, we build an imaging system with multiple lights where each image channel is obtained under different lighting conditions. In an offline stage, we capture images of an object in several poses. Then, we train random ferns to map the appearance of small image patches into votes on the pose space. At runtime, each patch of the input image votes on possible pose hypotheses. We further show how to increase the accuracy of the object poses from our discretized pose hypotheses. Our experiments show that the proposed method can detect and estimate poses of textureless and shiny objects accurately and robustly within half a second.
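The patch-to-pose voting scheme in the abstract can be illustrated with a minimal sketch of random-fern classification. This is not the authors' implementation; the patch size, number of binary tests, and pose discretization below are illustrative assumptions. Each fern hashes a patch to a leaf via random pixel-intensity comparisons; training accumulates pose votes per leaf, and at runtime every input patch adds its leaf's normalized votes to a pose accumulator.

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 16    # patch side length (assumed)
N_TESTS = 8   # binary tests per fern -> 2**8 leaf codes
N_POSES = 32  # number of discretized pose bins (assumed)

# A fern is a fixed set of random pixel-pair comparisons inside a patch.
tests = rng.integers(0, PATCH, size=(N_TESTS, 4))  # rows: (y1, x1, y2, x2)

def fern_code(patch):
    """Hash a patch to a leaf index via binary intensity comparisons."""
    code = 0
    for y1, x1, y2, x2 in tests:
        code = (code << 1) | int(patch[y1, x1] > patch[y2, x2])
    return code

# Training: count how often each leaf co-occurs with each pose bin.
votes = np.zeros((2 ** N_TESTS, N_POSES))

def train(patches, pose_ids):
    for patch, pose in zip(patches, pose_ids):
        votes[fern_code(patch), pose] += 1

# Runtime: every patch casts its leaf's normalized votes; the peak of
# the accumulator is the discretized pose hypothesis.
def estimate(patches):
    acc = np.zeros(N_POSES)
    for patch in patches:
        row = votes[fern_code(patch)]
        if row.sum() > 0:
            acc += row / row.sum()
    return int(np.argmax(acc))
```

In practice several independent ferns are used and their votes combined, and (as in the paper) a refinement step then improves accuracy beyond the discretized hypotheses.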
Pages: 3334-3341
Page count: 8