Soft Gripper Grasping Based on Complete Grasp Configuration and Multi-Stage Network

Cited by: 0
Authors
Liu W. [1 ]
Hu J. [1 ]
Wang W. [1 ]
Affiliations
[1] School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai
Source
Journal of Shanghai Jiao Tong University, 2020, Vol. 54 | Corresponding author: Hu, Jie (hujie@sjtu.edu.cn)
Keywords
Deep learning; Multi-task; Robotic grasping; Soft gripper;
DOI
10.16183/j.cnki.jsjtu.2020.05.008
Abstract
Vision-guided robotic grasping with a soft gripper depends on the correct grasp position, grasp angle, and grasp depth; therefore, a complete grasp configuration model and a multi-task loss function for the soft gripper are proposed. A two-stage deep learning network based on anchors and rotated boxes is designed to realize a direct mapping from image to multi-gripper grasp. The performance of the network is analyzed on the public Cornell grasping dataset and a self-built dataset. The results show that the two-stage network based on the multi-task loss and rotated anchors improves the accuracy of multi-output grasp detection and increases the success rate of robotic grasping. Finally, a soft robotic grasping system is constructed, and the grasping experiments show that the proposed method is robust to a certain degree of vision error, achieves a 96% grasp success rate on different fruits, and generalizes well to grasping fruit peels. © 2020, Shanghai Jiao Tong University Press. All rights reserved.
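The complete grasp configuration described in the abstract (position, angle, depth) and its multi-task loss can be sketched as below. This is a minimal illustrative sketch only: the record does not give the paper's actual loss terms, angle discretization, or weights, so the smooth-L1 regression terms, the cross-entropy over rotation bins, and the weights `w_pos`, `w_ang`, `w_dep` are all assumptions.

```python
import numpy as np

# Hypothetical grasp configuration: position (x, y), a discretized grasp
# angle (rotated-anchor style classification bins), and grasp depth d.
# The loss composition below is an assumption, not the paper's formulation.

def smooth_l1(x):
    """Elementwise smooth L1 (Huber-like) penalty, common in detection losses."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x**2, x - 0.5)

def multi_task_loss(pred, target, w_pos=1.0, w_ang=1.0, w_dep=1.0):
    """pred: {'pos': (x, y), 'angle_logits': [...], 'depth': d}
    target: {'pos': (x, y), 'angle_bin': int, 'depth': d}"""
    # Position regression: smooth L1 over (x, y)
    l_pos = smooth_l1(np.asarray(pred['pos']) - np.asarray(target['pos'])).sum()
    # Angle: cross-entropy over rotation bins (numerically stable log-softmax)
    logits = np.asarray(pred['angle_logits'], dtype=float)
    logp = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    l_ang = -logp[target['angle_bin']]
    # Depth regression: smooth L1 on the scalar depth
    l_dep = smooth_l1(np.array([pred['depth'] - target['depth']])).sum()
    # Weighted sum: one scalar objective trains all three output heads jointly
    return w_pos * l_pos + w_ang * l_ang + w_dep * l_dep
```

A joint weighted objective of this shape is what lets a single two-stage network supervise position, angle, and depth heads at once, rather than training a separate model per output.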
Pages: 507-514
Page count: 7
References
13 references in total
  • [1] ZHANG Jinhua, WANG Tao, HONG Jun, et al., Review of soft-bodied manipulator, Journal of Mechanical Engineering, 53, 13, pp. 19-28, (2017)
  • [2] SUNDERHAUF N, BROCK O, SCHEIRER W, et al., The limits and potentials of deep learning for robotics, The International Journal of Robotics Research, 37, 4, pp. 405-420, (2018)
  • [3] LENZ I, LEE H, SAXENA A., Deep learning for detecting robotic grasps, The International Journal of Robotics Research, 34, 4, pp. 705-724, (2015)
  • [4] REDMON J, ANGELOVA A., Real-time grasp detection using convolutional neural networks, 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 1316-1322, (2015)
  • [5] KUMRA S, KANAN C., Robotic grasp detection using deep convolutional neural networks, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 769-776, (2017)
  • [6] LIU W H, PAN Z Y, LIU W J, et al., Deep learning for picking point detection in dense cluster, 2017 11th Asian Control Conference (ASCC), pp. 1644-1649, (2017)
  • [7] ZENG A, SONG S R, YU K T, et al., Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching, 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1-8, (2018)
  • [8] PENG Yan, LIU Yonggan, YANG Yang, et al., Research progress on application of soft robotic gripper in fruit and vegetable picking, Transactions of the Chinese Society of Agricultural Engineering, 34, 9, pp. 11-20, (2018)
  • [9] JIANG Y, MOSESON S, SAXENA A., Efficient grasping from RGBD images: Learning using a new rectangle representation, 2011 IEEE International Conference on Robotics and Automation, pp. 3304-3311, (2011)
  • [10] MAHLER J, LIANG J, NIYAZ S, et al., Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics [DB/OL]