Robotic Grasp Detection Using Structure Prior Attention and Multiscale Features

Cited by: 7
Authors
Chen, Lu [1 ,2 ]
Niu, Mingdi [2 ,3 ]
Yang, Jing [4 ]
Qian, Yuhua [1 ,2 ]
Li, Zhuomao [1 ,2 ]
Wang, Keqi [1 ,2 ]
Yan, Tao [1 ,2 ]
Huang, Panfeng [5 ]
Affiliations
[1] Shanxi Univ, Inst Big Data Sci & Ind, Taiyuan 030006, Peoples R China
[2] Shanxi Univ, Sch Comp & Informat Technol, Taiyuan 030006, Peoples R China
[3] Xiamen Univ, Sch Aeronaut & Astronaut, Xiamen 361005, Peoples R China
[4] Shanxi Univ, Sch Automat & Software Engn, Taiyuan 030031, Peoples R China
[5] Northwestern Polytech Univ, Sch Astronaut, Res Ctr Intelligent Robot, Xian 710072, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS | 2024, Vol. 54, No. 11
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Grasping; Robots; Solid modeling; Point cloud compression; Accuracy; Fuses; Attention mechanism; deep neural network; grasp detection; robot; robotic grasping;
DOI
10.1109/TSMC.2024.3446841
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Most available grasp detection methods directly predict grasp configurations with deep neural networks in which all features are extracted and used equally, limiting the contribution of the features that are truly useful for grasping. Inspired by the three-section structure pattern observed in human-labeled graspable rectangles, we first design a structure prior attention (SPA) module that uses a two-dimensional encoding to enhance local patterns and a self-attention mechanism to reallocate the distribution of grasping-specific features. The SPA module is then integrated with fundamental feature extraction modules and residual connections to achieve both implicit and explicit feature fusion, and serves as the building block of our U-Net-like grasp detection network, which takes RGB-D images as input and outputs image-sized feature maps from which grasp configurations are determined. Extensive comparative experiments on five public datasets demonstrate our method's superiority over other approaches in detection accuracy, achieving 99.2%, 96.1%, 98.0%, 86.7%, and 92.6% on the Cornell, Jacquard, Clutter, VMRD, and GraspNet datasets, respectively. Under visual evaluation metrics and a user study, the quality maps generated by our method show a more concentrated distribution of high-confidence grasps and clearer separation from the background. Its effectiveness is further verified by robotic grasping in real-world scenarios, where it achieves a higher success rate.
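The abstract describes the SPA design at a high level: a two-dimensional encoding injects local structure, self-attention reallocates grasping-specific feature weights, and a residual connection fuses the result with the input. Below is a minimal, hypothetical PyTorch sketch of such a block based only on that description; the class name, the sinusoidal form of the 2-D encoding, and all hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SPABlock(nn.Module):
    """Hypothetical structure-prior-attention-style block (illustrative only)."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # channels must be divisible by 4 (encoding) and by num_heads (attention)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    @staticmethod
    def encode_2d(h, w, c, device):
        # Simple sinusoidal 2-D positional encoding standing in for the paper's
        # two-dimensional encoding of local structure; the actual scheme may differ.
        y = torch.arange(h, device=device).float().view(h, 1, 1)
        x = torch.arange(w, device=device).float().view(1, w, 1)
        freqs = 1.0 / (10000 ** (torch.arange(c // 4, device=device).float() * 4 / c))
        return torch.cat([torch.sin(y * freqs).expand(h, w, -1),
                          torch.cos(y * freqs).expand(h, w, -1),
                          torch.sin(x * freqs).expand(h, w, -1),
                          torch.cos(x * freqs).expand(h, w, -1)], dim=-1)  # (H, W, C)

    def forward(self, feat):
        # feat: (B, C, H, W) feature map from a backbone stage
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)                      # (B, H*W, C)
        pe = self.encode_2d(h, w, c, feat.device).reshape(h * w, c)   # structure prior
        q = self.norm(tokens + pe)
        attended, _ = self.attn(q, q, q)   # self-attention reweights the features
        out = tokens + attended            # residual fusion with the input features
        return out.transpose(1, 2).reshape(b, c, h, w)

# Example: same-shape in/out, so the block can replace a stage in a U-Net-like encoder
# feat = torch.randn(2, 64, 24, 24); out = SPABlock(64)(feat)  # out: (2, 64, 24, 24)

Because the block preserves the (B, C, H, W) shape, it can be composed with standard convolutional feature extraction modules, consistent with the abstract's description of SPA as a building block of the detection network.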
Pages: 7039-7053
Number of pages: 15