Attention-based efficient robot grasp detection network

Cited by: 3
Authors
Qin, Xiaofei [1]
Hu, Wenkai [1]
Xiao, Chen [2]
He, Changxiang [2]
Pei, Songwen [1,3,4]
Zhang, Xuedian [1,3,4,5]
Affiliations
[1] Univ Shanghai Sci & Technol, Sch Opt Elect & Comp Engn, Shanghai 200093, Peoples R China
[2] Univ Shanghai Sci & Technol, Coll Sci, Shanghai 200093, Peoples R China
[3] Shanghai Key Lab Modern Opt Syst, Shanghai 200093, Peoples R China
[4] Minist Educ, Key Lab Biomed Opt Technol & Devices, Shanghai 200093, Peoples R China
[5] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 201210, Peoples R China
Keywords
Robot grasp detection; Attention mechanism; Encoder-decoder; Neural network
DOI
10.1631/FITEE.2200502
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
To balance inference speed and detection accuracy, both of which matter for robot grasping tasks, we propose an encoder-decoder structured pixel-level grasp detection neural network named the attention-based efficient robot grasp detection network (AE-GDN). Three spatial attention modules are introduced in the encoder stages to enhance detailed spatial information, and three channel attention modules are introduced in the decoder stages to extract richer semantic information. Several lightweight and efficient DenseBlocks connect the encoder and decoder paths to improve the feature modeling capability of AE-GDN. A high intersection over union (IoU) value between the predicted grasp rectangle and the ground truth does not necessarily indicate a high-quality grasp configuration and may even correspond to a grasp that causes a collision, because traditional IoU loss calculations treat the center of the predicted rectangle as being just as important as the area around the grippers. We therefore design a new IoU loss calculation method based on an hourglass box matching mechanism, which establishes a better correspondence between high IoU values and high-quality grasp configurations. AE-GDN achieves accuracies of 98.9% and 96.6% on the Cornell and Jacquard datasets, respectively. The inference speed reaches 43.5 frames per second with only about 1.2 × 10^6 parameters. The proposed AE-GDN has also been deployed on a practical robotic arm grasping system, where it performs grasping well.
Pages: 1430-1444
Number of pages: 15