A YOLO-GGCNN based grasping framework for mobile robots in unknown environments

Cited: 20
Authors
Li, Zhen [1 ]
Xu, Benlian [1 ]
Wu, Di [2 ]
Zhao, Kang [3 ]
Chen, Siwen [4 ]
Lu, Mingli [5 ]
Cong, Jinliang [5 ]
Affiliations
[1] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou 215009, Jiangsu, Peoples R China
[2] Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
[3] Changshu Inst Technol, Sch Mech Engn, Suzhou 215500, Jiangsu, Peoples R China
[4] Nanjing Univ, Software Inst, Nanjing 210009, Jiangsu, Peoples R China
[5] Changshu Inst Technol, Sch Elect & Automat Engn, Suzhou 215500, Jiangsu, Peoples R China
Keywords
Optimization scheduling; Grasping system; Visual recognition; Object localization; FEATURES; SLAM; DEEP
DOI
10.1016/j.eswa.2023.119993
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Code
081104; 0812; 0835; 1405
Abstract
To grasp a desired object in an unknown environment, both accurate map construction and reliable visual recognition are prerequisites for mobile-robot collaboration. We propose a map fusion algorithm based on multi-robot cooperative simultaneous localization and mapping (SLAM) that constructs a map of an unknown environment using multiple highly maneuverable exploring robots. The resulting map is transmitted to the mobile robotic arm through the Robot Operating System (ROS) for the subsequent navigation step. In the grasping module, a two-step cascaded system combining YOLOv4 and a generative grasping convolutional neural network (YOLO-GGCNN) is proposed to grasp a given object with the mobile robotic arm. The grasping accuracy of our algorithm is 86.0%, and detection takes 0.11 s per frame. In the recognition module, the grasp prediction is converted into an information flow that controls the operation of the robotic arm. The proposed framework is validated through applications in various environments and through comparisons with existing approaches. The results indicate that our approach allows the mobile robotic arm to selectively and accurately grasp objects of interest in unknown environments and is more reliable than the other methods.
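The abstract describes a two-step cascade: a YOLO detector first localizes the target object, and GGCNN then predicts a pixel-wise grasp-quality map inside that region. The record does not include code, so the following is only a minimal sketch of that cascade's data flow under stated assumptions: the bounding box is taken as already produced by the detector, and `ggcnn_stub` is a hypothetical stand-in for the real GGCNN network (here it simply scores nearer depth pixels higher).

```python
import numpy as np

def crop_depth(depth, bbox):
    """Step 1 output consumed: crop the depth image to the
    detector's bounding box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = bbox
    return depth[y1:y2, x1:x2]

def ggcnn_stub(depth_crop):
    """Hypothetical stand-in for GGCNN inference: return a per-pixel
    grasp-quality map in [0, 1]. Here closer surfaces score higher."""
    q = 1.0 / (depth_crop + 1e-6)
    return q / q.max()

def best_grasp(depth, bbox):
    """Two-step cascade: crop to the detection, score grasp quality,
    and map the best pixel back to full-image coordinates."""
    quality = ggcnn_stub(crop_depth(depth, bbox))
    v, u = np.unravel_index(np.argmax(quality), quality.shape)
    x1, y1, _, _ = bbox
    return (x1 + u, y1 + v), quality[v, u]

# Synthetic 480x640 depth frame with one nearer object surface.
depth = np.full((480, 640), 1.0)
depth[200:220, 300:320] = 0.5
(px, py), score = best_grasp(depth, (280, 180, 360, 260))
print(px, py, round(score, 2))  # prints: 300 200 1.0
```

In the actual system the selected pixel would be deprojected with the camera intrinsics into a grasp pose for the arm; that conversion and the "information flow" controlling the arm are outside this sketch.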
Pages: 14
References
56 in total; entries [31]–[40] shown
  • [31] Li WY. 2020 IEEE Symposium Series on Computational Intelligence (SSCI), 2020: 1808. DOI: 10.1109/SSCI47803.2020.9308604
  • [32] Liu J, Balatti P, Ellis K, Hadjivelichkov D, Stoyanov D, Ajoudani A, Kanoulas D. Garbage Collection and Sorting with a Mobile Manipulator using Deep Learning and Whole-Body Control. Proceedings of the 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids 2020), 2021: 408-414.
  • [33] Lowe DG. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2): 91-110.
  • [34] Lu F, Milios E. Globally consistent range scan alignment for environment mapping. Autonomous Robots, 1997, 4(4): 333-349.
  • [35] Ma W, Zhang J, Wu Y, Jiao L, Zhu H, Zhao W. A Novel Two-Step Registration Method for Remote Sensing Images Based on Deep and Local Features. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(7): 4834-4843.
  • [36] Madsen O. Industrial Robot, 2015.
  • [37] Montemerlo M. Eighteenth National Conference on Artificial Intelligence (AAAI-02) / Fourteenth Innovative Applications of Artificial Intelligence Conference (IAAI-02), Proceedings, 2002: 593.
  • [38] Morrison D. arXiv:1804.05172, 2018.
  • [39] Nieuwenhuisen M. IEEE International Conference on Robotics and Automation, 2013: 2327. DOI: 10.1109/ICRA.2013.6630892
  • [40] Ramalepa LP, Jamisola RS Jr. A Review on Cooperative Robotic Arms with Mobile or Drones Bases. International Journal of Automation and Computing, 2021, 18(4): 536-555.