Vision-Based Robotic Grasping in Cluttered Scenes via Deep Reinforcement Learning

Cited: 0
Authors
Meng, Jiaming [1 ]
Geng, Zongsheng [1 ]
Zhao, Dongdong [1 ]
Yan, Shi [1 ]
Institutions
[1] Lanzhou Univ, Sch Informat Sci & Engn, Lanzhou 730000, Gansu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
OBJECTS;
DOI
10.1109/ICARM62033.2024.10715849
Chinese Library Classification
TH [Machinery and Instrument Industry];
Discipline Classification Code
0802;
Abstract
Complicated and unpredictable operating environments frequently cause robotic grasping to fail or to succeed at a low rate. This article proposes a scene-dispersion deep reinforcement learning (SDDRL) approach that uses the dispersion degree of objects to improve the grasping success rate, in particular for unknown irregular objects in cluttered scenes. Specifically, an end-to-end dispersion-degree reward generation network is designed to evaluate how scattered the objects in the robotic workspace are. Correspondingly, a grasping-strategy generation network is developed that optimizes the dispersion degree of objects: it generates grasping positions and angles that enable the robot to grasp objects successfully while making the workspace more scattered, thereby greatly improving grasping efficiency. Extensive experiments in the CoppeliaSim simulation environment show that the proposed SDDRL outperforms state-of-the-art methods.
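The abstract's core idea, a reward that combines grasp success with how much more scattered the workspace becomes after each action, can be sketched as follows. This is a minimal illustration and not the paper's method: SDDRL learns the dispersion degree with an end-to-end reward network, whereas here it is hand-coded as the mean pairwise distance between object centroids, and the function names, bonus, and weight values are hypothetical.

```python
import numpy as np


def dispersion_degree(centroids: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between object centroids.

    A simple hand-coded proxy for how scattered the workspace is;
    larger values mean the objects are more dispersed.
    """
    n = len(centroids)
    if n < 2:
        return 0.0
    # Pairwise difference vectors via broadcasting, shape (n, n, d).
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Average over the n*(n-1)/2 unordered pairs (upper triangle).
    return float(np.sum(np.triu(dists, k=1)) / (n * (n - 1) / 2))


def dispersion_reward(before: np.ndarray, after: np.ndarray,
                      grasp_succeeded: bool,
                      success_bonus: float = 1.0,
                      scatter_weight: float = 0.5) -> float:
    """Reward combining grasp success with the change in dispersion.

    `before`/`after` are (n, d) arrays of object centroids observed
    before and after the robot's action.
    """
    r = success_bonus if grasp_succeeded else 0.0
    r += scatter_weight * (dispersion_degree(after) - dispersion_degree(before))
    return r
```

An action that both grasps an object and pushes the remaining objects apart scores higher than one that only grasps, which matches the stated goal of making the workspace more scattered to ease subsequent grasps.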
Pages: 765 - 770
Page count: 6