Pose Estimation of Robot End-Effector using a CNN-Based Cascade Estimator

Cited by: 0
Authors
Ortega, Kevin D. [1 ]
Sepulveda, Jorge I. [1 ]
Hernandez, Byron [2 ]
Holguin, German A. [1 ,3 ]
Medeiros, Henry [2 ]
Affiliations
[1] Univ Tecnol Pereira, Dept Elect Engn, Pereira, Colombia
[2] Univ Florida, Dept Ag & Bio Engn, Gainesville, FL USA
[3] Marquette Univ, Dept Elect & Comp Engn, Milwaukee, WI 53233 USA
Source
2023 IEEE 6TH COLOMBIAN CONFERENCE ON AUTOMATIC CONTROL, CCAC | 2023
Keywords
Computer vision; Pose estimation; Robotic manipulation; Neural networks; Industry 4.0
DOI
10.1109/CCAC58200.2023.10333441
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Robotics has become an essential component of virtually every industry sector, including modern assembly, manufacturing, agricultural processes, and even retail operations. Ensuring the safety of human collaborators working alongside robots is of utmost importance, and accurate estimation of the robot's end-effector pose is critical for achieving this goal. In this paper, we present a method to estimate the end-effector pose of an industrial serial manipulator without relying on the robot's encoders. Our method uses depth cameras in the robot workspace, feeding a convolutional neural network and a cascade estimator that determine the 3D coordinates of each joint sequentially. We studied three variations of the method: the first estimates all joints independently, the second estimates each joint from the estimate of the previous joint, and the third uses the estimates of all previous joints. Our experimental setup employed a UR5 6-DOF robot arm in the ROS-Gazebo simulation ecosystem. The results show that the third variation exhibits the best performance, demonstrating the effectiveness of this methodology in predicting the pose of a manipulator using only computer vision. This approach is an enabling technology for many Industry 4.0 applications, improving both human safety and manufacturing efficiency.
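The sketch below is not the authors' code; it is a minimal illustration of the third variation described in the abstract, under the assumption that a shared CNN backbone (here, torchvision's ResNet-18 adapted to single-channel depth images) extracts image features and a small per-joint regression head predicts each joint's 3D coordinates. The class name CascadePoseEstimator and all layer sizes are hypothetical choices for illustration; in this variant, head k also receives the estimates of joints 1..k-1, while the first variation would drop that concatenation and the second would pass only the most recent joint estimate.

```python
# Minimal sketch (not the paper's implementation) of a cascade joint-position estimator.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class CascadePoseEstimator(nn.Module):
    def __init__(self, num_joints: int = 6, feat_dim: int = 512):
        super().__init__()
        backbone = resnet18(weights=None)
        # Accept single-channel depth images instead of RGB.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()  # expose the 512-d feature vector
        self.backbone = backbone
        # Head k sees the image features plus the 3-D estimates of all previous joints.
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(feat_dim + 3 * k, 256),
                nn.ReLU(),
                nn.Linear(256, 3),
            )
            for k in range(num_joints)
        )

    def forward(self, depth_image: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(depth_image)  # (B, 512)
        joints = []
        for head in self.heads:
            prev = torch.cat(joints, dim=1) if joints else feats.new_zeros(feats.size(0), 0)
            joints.append(head(torch.cat([feats, prev], dim=1)))  # (B, 3) per joint
        return torch.stack(joints, dim=1)  # (B, num_joints, 3)


if __name__ == "__main__":
    model = CascadePoseEstimator()
    pred = model(torch.randn(2, 1, 224, 224))  # batch of simulated depth frames
    print(pred.shape)  # torch.Size([2, 6, 3])
```

Feeding each head the previous estimates lets later joints exploit the kinematic dependence along the serial chain, which is consistent with the abstract's finding that the variant using all previous joints performs best.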
Pages: 85 - 90
Number of pages: 6