Visual perception for the 3D recognition of geometric pieces in robotic manipulation

Cited by: 17
Authors
Mateo, C. M. [1]
Gil, P. [1]
Torres, F. [1]
Affiliations
[1] Univ Alicante, Phys Syst Engn & Signal Theory Dept, Alicante 03690, Spain
Source
INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY | 2016, Vol. 83, Issue 9-12
Keywords
3D object recognition; 3D shape detection; Pose estimation; Robotic manipulation; Geometric objects; Surfaces;
DOI
10.1007/s00170-015-7708-8
CLC Classification
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
During grasping and intelligent robotic manipulation tasks, the camera position relative to the scene changes dramatically because the robot moves to adapt its path and grasp objects correctly; this is because the camera is mounted on the robot's end effector. For this reason, in this type of environment, a visual recognition system must be implemented that recognizes objects and obtains their positions in the scene automatically and autonomously. Furthermore, in industrial environments, the objects manipulated by robots are made of the same material and cannot be differentiated by appearance features such as texture or color. In this work, first, a study and analysis of 3D recognition descriptors has been carried out for application in these environments. Second, a visual recognition system based on a specific distributed client-server architecture has been proposed for the recognition of industrial objects that lack these appearance features. Our system has been implemented to overcome recognition problems that arise when objects can be distinguished only by their geometric shape and the simplicity of those shapes could create ambiguity. Finally, real tests are performed and illustrated to verify the satisfactory performance of the proposed system.
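The abstract describes matching textureless industrial parts purely by 3D geometric shape using global recognition descriptors. As a rough illustration only (not the authors' descriptors or their distributed client-server pipeline), the following minimal Python sketch matches a segmented point-cloud cluster against a small model database using a simple distance-to-centroid histogram as a stand-in descriptor; the names `shape_descriptor` and `recognize`, and the toy sphere/cube database, are assumptions made for this example.

```python
# Minimal sketch (not the paper's implementation): global shape-descriptor
# matching for textureless objects, using a crude distance-to-centroid
# histogram in place of the richer 3D descriptors evaluated in the paper.
import numpy as np

def shape_descriptor(points: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized histogram of point distances to the cloud centroid.

    `points` is an (N, 3) array of XYZ coordinates from a depth sensor.
    Scale is normalized so the descriptor is size-invariant.
    """
    centered = points - points.mean(axis=0)
    dists = np.linalg.norm(centered, axis=1)
    dists /= dists.max() + 1e-9          # scale normalization
    hist, _ = np.histogram(dists, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def recognize(scene_cluster: np.ndarray,
              model_db: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the best-matching model name and its chi-square distance."""
    query = shape_descriptor(scene_cluster)
    best_name, best_dist = None, np.inf
    for name, model_points in model_db.items():
        ref = shape_descriptor(model_points)
        # Chi-square distance between histograms (smaller = more similar).
        d = 0.5 * np.sum((query - ref) ** 2 / (query + ref + 1e-9))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, best_dist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical model database: points sampled from a sphere and a cube.
    sphere = rng.normal(size=(2000, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
    cube = rng.uniform(-1.0, 1.0, size=(2000, 3))
    db = {"sphere": sphere, "cube": cube}
    # A noisy sphere-like segment observed in the scene.
    scene = sphere[:1500] + rng.normal(scale=0.02, size=(1500, 3))
    print(recognize(scene, db))   # expected: ('sphere', small distance)
```

In a real system of the kind the abstract describes, the descriptor computation would run on the client attached to the eye-in-hand sensor, while the model database and matching could sit on a server; the toy histogram above only conveys the descriptor-matching idea.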
Pages: 1999-2013
Number of pages: 15
Related Papers
50 items total
  • [31] Picture perception reveals mental geometry of 3D scene inferences
    Koch, Erin
    Baig, Famya
    Zaidi, Qasim
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2018, 115 (30) : 7807 - 7812
  • [32] A new method to estimate the pose of an arbitrary 3D object without prerequisite knowledge: projection-based 3D perception
    Kou, Yejun
    Toda, Yuichiro
    Minami, Mamoru
    ARTIFICIAL LIFE AND ROBOTICS, 2022, 27 (01) : 149 - 158
  • [34] Visual Feedback for Core Training with 3D Human Shape and Pose
    Xie, Haoran
    Watatani, Atsushi
    Miyata, Kazunori
    2019 NICOGRAPH INTERNATIONAL (NICOINT), 2019, : 49 - 56
  • [35] Visual Localization using Imperfect 3D Models from the Internet
    Panek, Vojtech
    Kukelova, Zuzana
    Sattler, Torsten
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 13175 - 13186
  • [36] Evaluation of implicit 3D modeling for pose invariant face recognition
    Hüsken, M
    Brauckmann, M
    Gehlen, S
    Okada, K
    von der Malsburg, C
    BIOMETRIC TECHNOLOGY FOR HUMAN IDENTIFICATION, 2004, 5404 : 328 - 338
  • [37] Field testing of a 3D target recognition and pose estimation algorithm
    Ruel, S
    English, C
    Melo, L
    Berube, A
    Aikman, D
    Deslauriers, A
    Church, P
    Maheux, J
    AUTOMATIC TARGET RECOGNITION XIV, 2004, 5426 : 102 - 111
  • [38] Object Recognition in 3D Point Clouds with Maximum Likelihood Estimation
    Dantanarayana, Harshana G.
    Huntley, Jonathan M.
    AUTOMATED VISUAL INSPECTION AND MACHINE VISION, 2015, 9530
  • [39] Combining depth and gray images for fast 3D object recognition
    Pan, Wang
    Zhu, Feng
    Hao, Yingming
    OPTICAL MEASUREMENT TECHNOLOGY AND INSTRUMENTATION, 2016, 10155
  • [40] 3D ASSISTED FACE RECOGNITION VIA PROGRESSIVE POSE ESTIMATION
    Zhang, Wuming
    Huang, Di
    Samaras, Dimitris
    Morvan, Jean-Marie
    Wang, Yunhong
    Chen, Liming
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 728 - 732