A visual detection and grasping method based on deep learning

Cited: 0
Authors
Sun, Xiantao [1 ]
Cheng, Wei [1 ]
Chen, Wenjie [1 ]
Fang, Xiaohan [1 ]
Chen, Weihai [2 ]
Yang, Yinming [1 ]
Affiliations
[1] School of Electrical Engineering and Automation, Anhui University, Hefei
[2] School of Automation Science and Electrical Engineering, Beihang University, Beijing
Source
Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics | 2023, Vol. 49, No. 10
Funding
National Natural Science Foundation of China
Keywords
deep learning; neural network; object detection; pose estimation; robotic grasping;
DOI
10.13700/j.bh.1001-5965.2022.0130
Abstract
This paper proposes a deep learning-based visual detection and grasping method to address the shortcomings of existing robotic grasping systems: high hardware costs, difficulty in adapting to different objects, and large harmful torques. A channel attention mechanism is used to strengthen the network's ability to extract image features, improving target detection in complex environments with an improved YOLOv3; the average recognition rate increases by 0.32% over the unimproved baseline. In addition, to address the discreteness of estimated orientation angles, an embedded minimum area bounding rectangle (MABR) algorithm based on a VGG-16 backbone network is proposed to estimate and optimize the grasping position and orientation. The average error between the predicted grasping angle and the actual angle of the target is less than 2.47°, significantly reducing the additional harmful torque that the two-finger gripper applies to the object during grasping. A visual grasping system is then built around a UR5 robotic arm, a pneumatic two-finger gripper, a RealSense D435 camera, and an ATI Mini45 six-axis force/torque sensor. Experimental results show that the proposed method effectively grasps and classifies objects with modest hardware requirements, and reduces the harmful torque by about 75%, thereby reducing damage to grasped objects and showing great application prospects. © 2023 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
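The abstract does not specify which channel attention variant is added to YOLOv3; a minimal squeeze-and-excitation-style sketch (an assumption, not the paper's exact design) illustrates the idea of reweighting feature channels by globally pooled statistics. The weight matrices `w1`/`w2` and the function name are hypothetical:

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """SE-style channel attention sketch (assumed variant).

    fmap: (C, H, W) feature map
    w1:   (C//r, C) squeeze weights, w2: (C, C//r) excitation weights
    Returns the feature map with each channel rescaled by a weight in (0, 1).
    """
    z = fmap.mean(axis=(1, 2))              # global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)             # FC + ReLU -> (C//r,)
    w = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # FC + sigmoid -> (C,), each in (0, 1)
    return fmap * w[:, None, None]          # reweight channels

# Illustrative call: 4 channels, reduction ratio r = 2, zero weights
f = np.arange(16, dtype=float).reshape(4, 2, 2)
out = channel_attention(f, np.zeros((2, 4)), np.zeros((4, 2)))
```

With zero weights the sigmoid outputs 0.5 for every channel, so the map is uniformly halved; trained weights would instead emphasize informative channels and suppress the rest.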
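The paper embeds the MABR step after a VGG-16-based network; the network is not reproduced here, but the geometric core of a minimum-area bounding rectangle, whose long-edge angle gives a continuous grasp orientation, can be sketched in plain NumPy. It relies on the standard fact that the optimal rectangle shares a direction with some convex-hull edge; all names and the sample point set are illustrative:

```python
import numpy as np

def _cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(map(tuple, pts))
    def build(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = build(pts), build(reversed(pts))
    return np.array(lower[:-1] + upper[:-1], dtype=float)

def min_area_rect(points):
    """Return (angle_deg in [0, 180), area) of the minimum-area
    bounding rectangle by testing each hull-edge direction."""
    pts = np.asarray(points, dtype=float)
    hull = convex_hull(pts)
    best_angle, best_area = 0.0, np.inf
    for i in range(len(hull)):
        ex, ey = hull[(i + 1) % len(hull)] - hull[i]
        theta = np.arctan2(ey, ex)
        c, s = np.cos(theta), np.sin(theta)
        rot = pts @ np.array([[c, -s], [s, c]])   # rotate points by -theta
        area = np.ptp(rot[:, 0]) * np.ptp(rot[:, 1])
        if area < best_area:
            best_area, best_angle = area, np.degrees(theta) % 180.0
    return best_angle, best_area

# Illustrative 4x1 rectangle rotated by 30 degrees
t = np.radians(30.0)
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
corners = np.array([[0., 0.], [4., 0.], [4., 1.], [0., 1.]]) @ R.T
angle, area = min_area_rect(corners)
```

For an elongated object, a parallel-jaw gripper closing perpendicular to the recovered long-edge direction minimizes the lever arm, which is how a continuous MABR angle reduces the harmful torque relative to discretized angle bins.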
Pages: 2635-2644
Number of pages: 9
References
23 items in total
  • [1] DU G G, WANG K, LIAN S G, et al., Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: A review, Artificial Intelligence Review, 54, 3, pp. 1677-1734, (2021)
  • [2] ZHAI J M, DONG P F, ZHANG T., Positioning and grasping system design of industrial robot based on visual guidance, Machine Design & Research, 30, 5, pp. 45-49, (2014)
  • [3] WEI H, PAN S C, MA G, et al., Vision-guided hand–eye coordination for robotic grasping and its application in tangram puzzles, AI, 2, 2, pp. 209-228, (2021)
  • [4] MALLICK A, DEL POBIL A P, CERVERA E., Deep learning based object recognition for robot picking task, Proceedings of the 12th International Conference on Ubiquitous Information Management and Communication, pp. 1-9, (2018)
  • [5] BAI C C, YAN Z, SONG J L., Visual grasp control of robotic arm based on deep learning, Manned Spaceflight, 24, 3, pp. 299-307, (2018)
  • [6] HUANG Y M, YI Y., Robot object detection and localization based on deep learning, Computer Engineering and Applications, 56, 24, pp. 181-187, (2020)
  • [7] JIANG Y, MOSESON S, SAXENA A., Efficient grasping from RGBD images: Learning using a new rectangle representation, 2011 IEEE International Conference on Robotics and Automation, pp. 3304-3311, (2011)
  • [8] CHU F J, XU R N, VELA P A., Real-world multiobject, multigrasp detection, IEEE Robotics and Automation Letters, 3, 4, pp. 3355-3362, (2018)
  • [9] XIA H Y, SUO S F, WANG Y, et al., Object grasp detection algorithm based on improved Keypoint RCNN model, Chinese Journal of Scientific Instrument, 42, 4, pp. 236-246, (2021)
  • [10] ZHANG Z Y., Flexible camera calibration by viewing a plane from unknown orientations, Proceedings of the Seventh IEEE International Conference on Computer Vision, pp. 666-673, (2002)