A Multi-Object Grasping Detection Based on the Improvement of YOLOv3 Algorithm

Cited by: 0
Authors
Du, Kun [1 ,3 ]
Song, Jilai [2 ,3 ]
Wang, Xiaofeng [3 ]
Li, Xiang [1 ,3 ]
Lin, Jie [1 ]
Affiliations
[1] Northeastern Univ, Fac Robot Sci & Engn, Shenyang 110000, Peoples R China
[2] Chinese Acad Sci, Shenyang Inst Automat, State Key Lab Robot, Shenyang 110000, Peoples R China
[3] Shenyang SIASUN Robot & Automat Co LTD, Shenyang 110000, Peoples R China
Source
PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020) | 2020
Funding
National Key Research and Development Program of China;
Keywords
YOLOv3; Robotic grasping; Deep learning; Corner detection; Grasping position and pose detection;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
YOLOv3 has achieved good results in object detection. To enable multi-object grasping detection, its network structure is improved and applied to object position and pose detection in robotic grasping. A deep learning model is proposed to predict the robot's grasping position; it detects multiple objects in real time and grasps them in an order determined by their semantic information. A dataset is built for the specific application scenario, and a YOLOv3-based corner detection method is proposed for grasping position and pose detection. Unlike traditional corner detection methods, the detected corners carry semantic information. In the scene, the object is first classified and localized, its corners are then detected, falsely detected corners are filtered out using the object's location, and a corresponding algorithm completes corners that were missed, greatly improving corner detection accuracy to 99% on the self-made dataset. Finally, the corner positions are used to compute the object's centroid, which serves as the grasping point; point cloud information from a depth camera is then used to compute the grasping pose. This method can greatly improve the accuracy of grasping detection in specific scenes.
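The corner-filtering and grasp-point step described in the abstract can be illustrated with a minimal sketch: corners that fall outside an object's bounding box are discarded as false detections, and the centroid of the remaining corners is taken as the 2-D grasping point. This is not the authors' implementation; the data structures, names, and numbers below are illustrative assumptions only.

# Minimal sketch (not the paper's code): filter candidate corners by the object's
# bounding box, then use the corner centroid as the pixel-space grasping point.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Detection:
    bbox: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), assumed format
    corners: List[Point]                     # candidate corners from a corner-detection stage

def filter_corners(det: Detection, margin: float = 5.0) -> List[Point]:
    """Keep only corners inside (or within a small margin of) the object's bounding box."""
    x0, y0, x1, y1 = det.bbox
    return [(x, y) for (x, y) in det.corners
            if x0 - margin <= x <= x1 + margin and y0 - margin <= y <= y1 + margin]

def grasp_point(corners: List[Point]) -> Point:
    """Centroid of the retained corners, used here as the 2-D grasping point."""
    xs, ys = zip(*corners)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Example with made-up values: the outlier corner (400, 40) is filtered out.
det = Detection(bbox=(100, 120, 220, 260),
                corners=[(105, 125), (215, 128), (108, 255), (212, 252), (400, 40)])
kept = filter_corners(det)
print(grasp_point(kept))  # -> (160.0, 190.0)

In the paper's pipeline, this 2-D grasping point would then be combined with depth-camera point cloud data to recover the grasping pose; that step depends on the camera model and is not sketched here.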
Pages: 1027-1033
Page count: 7