Learning to Regrasp Using Visual-Tactile Representation-Based Reinforcement Learning

Cited: 0
Authors
Zhang, Zhuangzhuang [1 ]
Sun, Han [1 ]
Zhou, Zhenning [1 ]
Wang, Yizhao [1 ]
Huang, Huang [2 ]
Zhang, Zhinan [1 ]
Cao, Qixin [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, State Key Lab Mech Syst & Vibrat, Shanghai 200240, Peoples R China
[2] Beijing Inst Control Engn, Beijing 100191, Peoples R China
Keywords
Visualization; Force; Grasping; Training; Representation learning; Tactile sensors; Feature extraction; Stability analysis; Optimization; Hardware; Reinforcement learning; representation learning; robotic regrasp; transfer learning; visual-tactile fusion; VISION; SENSOR;
DOI
10.1109/TIM.2024.3470030
CLC classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline codes
0808; 0809
Abstract
Open-loop grasp planners that rely on vision are prone to failures caused by calibration errors, visual occlusions, and other factors. Moreover, they cannot adapt the grasp pose and gripping force in real time, increasing the risk of damage to unidentified objects. This work presents a multimodal regrasp control framework based on deep reinforcement learning (RL). Given a coarse initial grasp pose, the proposed regrasping policy efficiently optimizes the grasp pose and gripping force by deeply fusing visual and high-resolution tactile data in a closed-loop fashion. To enhance the sample efficiency and generalization capability of the RL algorithm, this work leverages self-supervision to pretrain a visual-tactile representation model, which serves as the feature extraction network during RL policy training. The RL policy is trained purely in simulation and deployed to a real-world environment via domain adaptation and domain randomization techniques. Extensive experiments in simulation and real-world environments indicate that a robot guided by the regrasping policy achieves gentle grasping of unknown objects with high success rates. Finally, comparisons with a state-of-the-art algorithm further demonstrate the superiority of our approach.
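The pipeline outlined in the abstract — a self-supervised, pretrained visual-tactile representation model that is frozen and reused as the feature extractor for an RL regrasping policy — can be sketched in miniature. Everything below is an illustrative assumption, not the paper's implementation: the class names, embedding dimensions, fusion-by-concatenation scheme, and the 4-D action layout (pose deltas plus a gripping-force adjustment) are hypothetical stand-ins, and the "pretrained" weights are fixed random projections.

```python
import numpy as np

rng = np.random.default_rng(0)


class FrozenVisualTactileEncoder:
    """Toy stand-in for a pretrained visual-tactile representation model.

    In the pipeline described by the abstract, the encoder is pretrained
    with self-supervision and then kept fixed during RL policy training.
    Here the "pretrained" weights are simply frozen random projections.
    """

    def __init__(self, vis_dim=64, tac_dim=32, latent_dim=16):
        self.W_vis = rng.standard_normal((vis_dim, latent_dim))
        self.W_tac = rng.standard_normal((tac_dim, latent_dim))

    def encode(self, vis_obs, tac_obs):
        # Embed each modality separately, then fuse by concatenation.
        z_vis = np.tanh(vis_obs @ self.W_vis)
        z_tac = np.tanh(tac_obs @ self.W_tac)
        return np.concatenate([z_vis, z_tac], axis=-1)


class RegraspPolicy:
    """Linear policy head trained on top of the frozen fused features.

    Emits a hypothetical 4-D action: in-plane pose deltas (dx, dy,
    d_theta) plus a gripping-force adjustment, all squashed to (-1, 1).
    """

    def __init__(self, feat_dim=32, act_dim=4):
        self.W = rng.standard_normal((feat_dim, act_dim)) * 0.1

    def act(self, features):
        return np.tanh(features @ self.W)


encoder = FrozenVisualTactileEncoder()
policy = RegraspPolicy()

vis = rng.standard_normal(64)   # flattened visual observation (placeholder)
tac = rng.standard_normal(32)   # flattened tactile observation (placeholder)
action = policy.act(encoder.encode(vis, tac))
print(action.shape)  # (4,)
```

The design point the sketch illustrates is the separation of concerns: only the small policy head would be optimized by RL, while the representation model — pretrained without reward signals — supplies a compact fused observation, which is what the abstract credits for the improved sample efficiency.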
Pages: 11