Multiple Workpiece Grasping Point Localization Method Based on Deep Learning

Times Cited: 0
Authors
An, Guanglin [1 ,2 ]
Li, Zonggang [1 ,2 ]
Du, Yajiang [1 ,2 ]
Kang, Huifeng [3 ]
Affiliations
[1] Lanzhou Jiaotong Univ, Sch Mech & Elect Engn, Lanzhou 730070, Gansu, Peoples R China
[2] Lanzhou Jiaotong Univ, Robot Res Inst, Lanzhou 730070, Gansu, Peoples R China
[3] North China Inst Aerosp Engn, Coll Aerosp Engn, Langfang 065000, Hebei, Peoples R China
Keywords
machine vision; workpiece detection; YOLOv5 rotated detection; Ghost bottleneck; attention mechanism
DOI
10.3788/LOP220857
CLC Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Code
0808; 0809
Abstract
In this study, a deep-learning-based method for localizing the grasping points of multiple workpieces is proposed to address the problems caused by the disordered placement and mutual occlusion of workpieces on industrial production lines, such as missed detections, false detections, and difficulty in locating grasping points. First, YOLOv5 is adopted as the base network, and a data preprocessing module is added at the input to apply angle transformations during image augmentation. A feature refinement network is then added to the detection layer so that rotated workpieces can be recognized and localized using rotated anchor boxes, and the BottleneckCSP modules in the backbone are replaced with lightweight Ghost bottleneck modules to offset the extra time cost introduced by the secondary localization of the rotated anchor boxes. In addition, the fused feature maps are fed into an attention mechanism module to extract the key features of the workpieces. The image is then cropped according to each detection box, approximately reducing multi-workpiece detection to single-workpiece detection. Finally, the centroid of each workpiece is computed, and the grasping point is determined by combining the centroid with the rotation angle of the detection box. Experimental results show that the proposed method effectively solves the grasping point localization problem for multiple workpieces that are close to or occluding one another, while achieving higher detection speed and accuracy, thereby guaranteeing real-time multi-workpiece detection in industrial scenes.
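The abstract does not give implementation details, but the Ghost bottleneck it mentions builds on the Ghost module of GhostNet (Han et al., CVPR 2020): a small primary convolution produces a few intrinsic feature maps, and cheap depthwise operations derive the remaining "ghost" maps from them. A minimal PyTorch sketch of that idea, assuming a standard GhostNet-style layout rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module sketch: a cheap stand-in for a full convolution.

    A pointwise convolution yields the intrinsic feature maps; a
    depthwise convolution then derives the remaining "ghost" maps
    from them, roughly halving the expensive computation.
    """
    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        init_ch = out_ch // ratio          # intrinsic maps
        cheap_ch = out_ch - init_ch        # ghost maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(        # depthwise = the "cheap operation"
            nn.Conv2d(init_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)
```

Two such modules stacked with a shortcut form a Ghost bottleneck; swapping them in for BottleneckCSP trades a little representational power for the speed the rotated-box pipeline needs.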
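For the cropping step, one common way to cut a workpiece out of a rotated detection box is to rotate the whole image about the box center so the box becomes axis-aligned, then take an ordinary slice. A sketch using OpenCV; the function name and parameters are illustrative assumptions, not the paper's API:

```python
import cv2

def crop_rotated_box(image, cx, cy, w, h, angle_deg):
    """Hypothetical helper: extract one workpiece from a rotated box.

    Rotating the full image about the box center (cx, cy) by the box
    angle aligns the box with the image axes, after which a plain
    axis-aligned slice of size (w, h) is the workpiece crop.
    """
    rows, cols = image.shape[:2]
    M = cv2.getRotationMatrix2D((cx, cy), angle_deg, 1.0)
    rotated = cv2.warpAffine(image, M, (cols, rows))
    x0 = max(int(round(cx - w / 2)), 0)
    y0 = max(int(round(cy - h / 2)), 0)
    return rotated[y0:y0 + int(h), x0:x0 + int(w)]
```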
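For the final step, the centroid of a single-workpiece crop can be computed from image moments and paired with the detection box angle to define the grasping pose. A hedged sketch, assuming a grayscale crop and an Otsu threshold for the workpiece mask (the paper does not specify its segmentation):

```python
import cv2

def grasp_point(crop, angle_deg):
    """Hypothetical helper: grasping pose for one cropped workpiece.

    The centroid comes from the first-order image moments of a binary
    workpiece mask; the rotated box angle supplies the gripper
    orientation. Returns ((cx, cy), angle) or None if the mask is empty.
    """
    _, mask = cv2.threshold(crop, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # assumed mask
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                       # no foreground pixels
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return (cx, cy), angle_deg
```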
Pages: 11