A New Kinect V2-Based Method for Visual Recognition and Grasping of a Yarn-Bobbin-Handling Robot

Cited by: 9
Authors
Han, Jinghai [1 ]
Liu, Bo [2 ]
Jia, Yongle [2 ]
Jin, Shoufeng [2 ]
Sulowicz, Maciej [3 ]
Glowacz, Adam [3 ]
Krolczyk, Grzegorz [4 ]
Li, Zhixiong [4 ,5 ]
Affiliations
[1] Nanjing Vocat Inst Transport Technol, Inst Rail Transport, Nanjing 211188, Peoples R China
[2] Xian Polytech Univ, Coll Mech & Elect Engn, Xian 710600, Peoples R China
[3] Cracow Univ Technol, Dept Elect Engn, PL-31155 Krakow, Poland
[4] Opole Univ Technol, Dept Mfg Engn & Automat Prod, PL-45758 Opole, Poland
[5] Yonsei Univ, Yonsei Frontier Lab, Seoul 03722, South Korea
Keywords
machine vision; industrial robots; yarn bobbin identification; robot control; point cloud
DOI
10.3390/mi13060886
Chinese Library Classification (CLC) number
O65 [Analytical Chemistry]
Discipline classification codes
070302; 081704
Abstract
This work proposes a Kinect V2-based visual method to reduce the dependence on human operators in the grabbing operation of a yarn-bobbin-handling robot. In the new method, a Kinect V2 camera produces three-dimensional (3D) yarn-bobbin point cloud data for the robot in a work scenario. After the noise points are removed by a suitable filtering process, the M-estimator sample consensus (MSAC) algorithm is employed to fit a plane to the 3D cloud data; principal component analysis (PCA) is then adopted to coarsely register the template point cloud with the yarn-bobbin point cloud and thereby define the initial position of the yarn bobbin. Lastly, the iterative closest point (ICP) algorithm achieves precise registration of the 3D cloud data to determine the exact pose of the yarn bobbin. To evaluate the performance of the proposed method, an experimental platform was developed to validate the grabbing operation of the yarn-bobbin robot in different scenarios. The analysis results show that the average working time of the robot system is within 10 s and the grasping success rate is above 80%, which meets industrial production requirements.
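The coarse-to-fine registration pipeline the abstract describes (PCA alignment of principal axes for an initial pose, then ICP for precise refinement) can be sketched in plain NumPy/SciPy. This is an illustrative sketch on synthetic data, not the authors' implementation: the MSAC plane-removal and filtering steps are omitted, real Kinect V2 clouds are replaced by a random point set, and all function names here are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def _fix_signs(V):
    # Resolve the PCA eigenvector sign ambiguity: force the largest-magnitude
    # component of each axis to be positive so both clouds share a convention.
    for k in range(V.shape[1]):
        if V[np.argmax(np.abs(V[:, k])), k] < 0:
            V[:, k] *= -1
    return V

def pca_coarse_align(template, scene):
    """Initial rigid transform (R, t) mapping template -> scene by aligning
    centroids and PCA principal axes (the coarse-registration step)."""
    mu_t, mu_s = template.mean(0), scene.mean(0)
    _, Vt = np.linalg.eigh(np.cov((template - mu_t).T))
    _, Vs = np.linalg.eigh(np.cov((scene - mu_s).T))
    Vt, Vs = _fix_signs(Vt), _fix_signs(Vs)
    R = Vs @ Vt.T
    if np.linalg.det(R) < 0:       # keep a proper rotation (det = +1)
        Vs[:, 0] *= -1
        R = Vs @ Vt.T
    return R, mu_s - R @ mu_t

def icp_refine(template, scene, R, t, iters=30):
    """Point-to-point ICP: nearest-neighbour correspondences via a k-d tree,
    then a Kabsch (SVD) update of the rigid transform each iteration."""
    tree = cKDTree(scene)
    for _ in range(iters):
        moved = template @ R.T + t
        _, idx = tree.query(moved)          # closest scene point per template point
        src, dst = template, scene[idx]
        mu_s, mu_d = src.mean(0), dst.mean(0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vh = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vh.T @ U.T))])
        R = Vh.T @ D @ U.T                  # reflection-corrected rotation
        t = mu_d - R @ mu_s
    return R, t

# Synthetic elongated "bobbin-like" cloud and a known rigid transform of it.
rng = np.random.default_rng(0)
template = rng.normal(size=(200, 3)) * [3.0, 1.0, 0.3]
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 0.1])
scene = template @ R_true.T + t_true

R0, t0 = pca_coarse_align(template, scene)   # coarse pose from PCA
R, t = icp_refine(template, scene, R0, t0)   # precise pose from ICP
err = np.abs(template @ R.T + t - scene).max()
print(f"max alignment error: {err:.2e}")
```

A production version would operate on filtered Kinect V2 depth data after MSAC plane removal, for which a dedicated point cloud library is the more practical choice; the sketch only shows why the PCA stage matters: ICP converges locally, so it needs the rough initial pose that PCA provides.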
Pages: 11
Related Papers
28 in total
  • [1] Bergamini, Luca; Sposato, Mario; Pellicciari, Marcello; Peruzzini, Margherita; Calderara, Simone; Schmidt, Juliana. Deep learning-based method for vision-guided robotic grasping of unknown objects. Advanced Engineering Informatics, 2020, 44.
  • [2] Cui, Haihua; Sun, Ruichao; Fang, Zhou; Lou, Huacheng; Tian, Wei; Liao, Wenhe. A novel flexible two-step method for eye-to-hand calibration for robot assembly system. Measurement & Control, 2020, 53(9-10): 2020-2029.
  • [3] D'Avella, Salvatore; Tripicchio, Paolo; Avizzano, Carlo Alberto. A study on picking objects in cluttered environments: Exploiting depth features for a custom low-cost universal jamming gripper. Robotics and Computer-Integrated Manufacturing, 2020, 63.
  • [4] Du, Yi-Chun; Taryudi, Taryudi; Tsai, Ching-Tang; Wang, Ming-Shyan. Eye-to-hand robotic tracking and grabbing based on binocular vision. Microsystem Technologies, 2021, 27(4): 1699-1710.
  • [5] Ebrahimi, Ali; Czarnuch, Stephen. Automatic Super-Surface Removal in Complex 3D Indoor Environments Using Iterative Region-Based RANSAC. Sensors, 2021, 21(11).
  • [6] Gao, Mingyu; Li, Xiao; He, Zhiwei; Yang, Yuxiang. An Automatic Assembling System for Sealing Rings Based on Machine Vision. Journal of Sensors, 2017, 2017.
  • [7] Han, Xian-Feng; Jin, Jesse S.; Wang, Ming-Jie; Jiang, Wei; Gao, Lei; Xiao, Liping. A review of algorithms for filtering the 3D point cloud. Signal Processing: Image Communication, 2017, 57: 103-112.
  • [8] Han, Yi; Zhao, Kai; Chu, Zenan; Zhou, Yan. Grasping Control Method of Manipulator Based on Binocular Vision Combining Target Detection and Trajectory Planning. IEEE Access, 2019, 7: 167973-167981.
  • [9] Hu, Jia; Liu, Shaoli; Liu, Jianhua; Wang, Zhi; Huang, Hao. Pipe pose estimation based on machine vision. Measurement, 2021, 182.
  • [10] Jiang, Du; Li, Gongfa; Sun, Ying; Hu, Jiabing; Yun, Juntong; Liu, Ying. Manipulator grabbing position detection with information fusion of color image and depth image using deep learning. Journal of Ambient Intelligence and Humanized Computing, 2021, 12(12): 10809-10822.