A visual detection and grasping method based on deep learning

Cited by: 0
Authors
Sun, Xiantao [1 ]
Cheng, Wei [1 ]
Chen, Wenjie [1 ]
Fang, Xiaohan [1 ]
Chen, Weihai [2 ]
Yang, Yinming [1 ]
Affiliations
[1] School of Electrical Engineering and Automation, Anhui University, Hefei
[2] School of Automation Science and Electrical Engineering, Beihang University, Beijing
Source
Beijing Hangkong Hangtian Daxue Xuebao / Journal of Beijing University of Aeronautics and Astronautics | 2023, Vol. 49, No. 10
Funding
National Natural Science Foundation of China
Keywords
deep learning; neural network; object detection; pose estimation; robotic grasping;
DOI
10.13700/j.bh.1001-5965.2022.0130
Abstract
This paper proposes a deep-learning-based visual detection and grasping method to address the shortcomings of existing robotic grasping systems: high hardware cost, poor adaptability to different objects, and large harmful torques. A channel attention mechanism strengthens the network's ability to extract image features, improving target detection in complex environments with an improved YOLOv3; the average recognition rate increases by 0.32% over the baseline. In addition, to address the discreteness of estimated orientation angles, an embedded minimum-area bounding rectangle (MABR) algorithm based on a VGG-16 backbone network is proposed to estimate and refine the grasp position and orientation. The average error between the predicted grasp angle and the actual angle of the target is less than 2.47°, significantly reducing the additional harmful torque that the two-finger gripper applies to the object during grasping. A visual grasping system is then built with a UR5 robotic arm, a pneumatic two-finger gripper, a RealSense D435 camera, and an ATI Mini45 six-axis force/torque sensor. Experimental results show that the proposed method effectively grasps and classifies objects with modest hardware requirements, and reduces the harmful torque by about 75%, thereby lessening damage to grasped objects and showing strong application prospects. © 2023 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
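The MABR step in the abstract rests on a classical geometric primitive: the minimum-area bounding rectangle of a point set, whose orientation yields a continuous grasp angle rather than a discretized angle class. The paper's network architecture is not reproduced here; as a minimal sketch of the geometric part only, assuming the object has already been segmented into 2-D pixel coordinates, the rectangle can be found with a rotating-calipers pass over the convex hull (the function names and sample points below are illustrative, not from the paper):

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns the hull in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def min_area_rect(points):
    """Return (center, (w, h), angle_deg) of the minimum-area bounding
    rectangle. The optimum is attained with one rectangle side flush
    against a hull edge, so it suffices to test each edge direction."""
    hull = convex_hull(points)
    n = len(hull)
    best = None
    for i in range(n):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % n]
        theta = math.atan2(y2 - y1, x2 - x1)
        c, s = math.cos(theta), math.sin(theta)
        # Rotate the hull by -theta so this edge is axis-aligned,
        # then take the ordinary axis-aligned bounding box.
        xs = [ x*c + y*s for x, y in hull]
        ys = [-x*s + y*c for x, y in hull]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        if best is None or w*h < best[0]:
            cx = (max(xs) + min(xs)) / 2
            cy = (max(ys) + min(ys)) / 2
            # Rotate the box center back into the original image frame.
            center = (cx*c - cy*s, cx*s + cy*c)
            best = (w*h, center, (w, h), math.degrees(theta))
    return best[1], best[2], best[3]
```

The returned angle is continuous, which is the property the paper exploits to avoid the harmful torque caused by coarse, discretized orientation bins.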
Pages: 2635-2644
Page count: 9