NG-Net: No-Grasp annotation grasp detection network for stacked scenes

Cited by: 2
Authors
Shi, Min [1 ]
Hou, Jingzhao [1 ]
Li, Zhaoxin [2 ]
Zhu, Dengming [3 ]
Affiliations
[1] North China Elect Power Univ, Sch Control & Comp Engn, 2 Beinong Rd, Beijing 102206, Peoples R China
[2] Chinese Acad Agr Sci, Agr Informat Inst, 12 Zhongguancun South St, Beijing 100081, Peoples R China
[3] Chinese Acad Sci, Inst Comp Technol, 6 Sci Acad South Rd, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Grasp detection; No-Grasp annotation; Stacked scenes; Robotic grasping;
DOI
10.1007/s10845-024-02321-6
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Achieving a high grasping success rate in stacked environments is central to robotic grasping tasks. Most methods reach high success rates by training networks on datasets containing large numbers of grasp annotations, which require substantial manpower and material resources to produce. Achieving a high grasping success rate in stacked scenes without grasp annotations is therefore a challenging task. To address this, we propose a No-Grasp annotation grasp detection network for stacked scenes (NG-Net). Our network consists of two modules: an object selection module and a grasp generation module. Specifically, the object selection module performs instance segmentation on the raw point cloud and selects the object with the highest score as the object to be grasped, while the grasp generation module analyzes the geometric features of the point cloud surface with mathematical methods to generate grasping poses without grasp annotations. Experiments on the modified IPA-Binpicking dataset G show that NG-Net achieves an average grasp success rate of 97% in stacked-scene grasping experiments, 14-22% higher than PointNetGPD.
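The abstract describes a two-stage pipeline: pick the highest-scoring segmented object, then derive a grasp pose from the geometry of its surface points. The sketch below is only a minimal illustration of that kind of pipeline, not the paper's implementation: the instance labels, per-object scores, and the PCA-based pose heuristic are all assumptions made for the example.

```python
# Illustrative two-stage sketch (assumptions, not NG-Net's actual code):
# stage 1 selects the highest-scoring segmented object, stage 2 derives a
# grasp pose from local surface geometry via PCA of the object's points.
import numpy as np


def select_object(points, instance_labels, scores):
    """Return the points belonging to the instance with the highest score.

    points          : (N, 3) raw point cloud
    instance_labels : (N,) integer instance id per point (from a segmenter)
    scores          : dict mapping instance id -> graspability score
    """
    best_id = max(scores, key=scores.get)
    return points[instance_labels == best_id]


def grasp_pose_from_geometry(object_points):
    """Derive a simple grasp frame from the object's surface geometry.

    PCA of the object points: the approach axis is the direction of least
    variance (a rough surface-normal proxy), the gripper closing axis the
    second principal direction. Returns (center, rotation_matrix).
    """
    center = object_points.mean(axis=0)
    cov = np.cov((object_points - center).T)
    _, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    approach = eigvecs[:, 0]                 # least-variance direction
    closing = eigvecs[:, 1]                  # mid-variance direction
    binormal = np.cross(approach, closing)
    rotation = np.stack([approach, closing, binormal], axis=1)
    return center, rotation


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy "objects": a flat patch and a small blob, with fake scores.
    patch = rng.uniform(-0.05, 0.05, size=(200, 3)) * np.array([1.0, 1.0, 0.05])
    blob = rng.normal(loc=[0.3, 0.0, 0.0], scale=0.02, size=(200, 3))
    points = np.vstack([patch, blob])
    labels = np.concatenate([np.zeros(200, int), np.ones(200, int)])
    scores = {0: 0.9, 1: 0.4}                # stand-in network confidences
    obj = select_object(points, labels, scores)
    center, rot = grasp_pose_from_geometry(obj)
    print("grasp center:", center)
    print("grasp rotation:\n", rot)
```

In the actual method, the per-object scores would come from the object selection module's instance segmentation, and the pose computation would follow the paper's geometric analysis rather than this plain PCA heuristic.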
Pages: 1477-1490
Number of pages: 14