Comparison of Model-Based and Model-Free Reinforcement Learning for Real-World Dexterous Robotic Manipulation Tasks

Cited by: 3
Authors
Valencia, David [1 ]
Jia, John [1 ]
Li, Raymond [1 ]
Hayashi, Alex [2 ]
Lecchi, Megan [2 ]
Terezakis, Reuel [2 ]
Gee, Trevor [1 ]
Liarokapis, Minas [2 ]
MacDonald, Bruce A. [1 ]
Williams, Henry [1 ]
Affiliations
[1] Univ Auckland, Ctr Automat & Robot Engn Sci, Auckland, New Zealand
[2] Univ Auckland, New Dexter Res Grp, Auckland, New Zealand
Keywords
DOI
10.1109/ICRA48891.2023.10160983
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Model-Free Reinforcement Learning (MFRL) has shown significant promise for learning dexterous robotic manipulation tasks, at least in simulation. However, the large number of samples required, as well as the long training times, prevents MFRL from scaling to complex real-world tasks. Model-Based Reinforcement Learning (MBRL) emerges as a potential solution that, in theory, can improve on the data efficiency of MFRL approaches. This could drastically reduce training times and broaden the application of RL to real-world robotic tasks. This article presents a study on the feasibility of using state-of-the-art MBRL to improve the training time for two real-world dexterous manipulation tasks. The evaluation is conducted on a real low-cost robot gripper where the predictive model and the control policy are learned from scratch. The results indicate that MBRL is capable of learning accurate models of the world, but does not show the clear improvements in learning the control policy in the real world that the prior literature suggests should be expected.
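To make the distinction drawn in the abstract more concrete, the sketch below illustrates the generic model-based RL loop: fit a predictive model of the dynamics from a small batch of real transitions, then plan over cheap imagined rollouts, instead of consuming many real-world interactions as a model-free learner would. This is a minimal, self-contained toy illustration; the environment, the least-squares dynamics model, and the random-shooting planner are assumptions made here for exposition and are not the setup or implementation used in the paper.

```python
import numpy as np

# Minimal sketch of a model-based RL loop (illustrative only, not the
# authors' method): learn a predictive model from real transitions,
# then use imagined rollouts for planning, saving real-world samples.

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

def true_step(s, a):
    """Hypothetical stand-in for the real (unknown) gripper dynamics."""
    return 0.9 * s + 0.1 * np.tanh(a).repeat(state_dim // action_dim)

def reward(s):
    return -np.linalg.norm(s)  # drive the state toward the origin

# 1) Collect a small batch of real transitions (the expensive part).
S, A, S_next = [], [], []
s = rng.normal(size=state_dim)
for _ in range(200):
    a = rng.normal(size=action_dim)
    s2 = true_step(s, a)
    S.append(s); A.append(a); S_next.append(s2)
    s = s2
X = np.hstack([np.array(S), np.array(A)])  # inputs: (state, action)
Y = np.array(S_next)                       # targets: next state

# 2) Fit a predictive model of the world (here: linear least squares).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
def model_step(s, a):
    return np.concatenate([s, a]) @ W

# 3) Plan with the learned model (random-shooting MPC): score candidate
#    action sequences in imagination and execute only the best first action.
#    A model-free agent would instead need many more real rollouts.
def plan(s, horizon=5, n_candidates=64):
    best_a, best_ret = None, -np.inf
    for _ in range(n_candidates):
        seq = rng.normal(size=(horizon, action_dim))
        sim_s, ret = s.copy(), 0.0
        for a in seq:
            sim_s = model_step(sim_s, a)
            ret += reward(sim_s)
        if ret > best_ret:
            best_ret, best_a = ret, seq[0]
    return best_a

s = rng.normal(size=state_dim)
for _ in range(20):
    s = true_step(s, plan(s))
print("final distance to goal:", np.linalg.norm(s))
```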
Pages: 871-878
Number of pages: 8
Related Papers
50 records in total
  • [1] Real-world dexterous object manipulation based deep reinforcement learning
    Yao, Qingfeng
    Wang, Jilong
    Yang, Shuyu
    arXiv, 2021
  • [2] Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning
    Hu, Jianshu
    Weng, Paul
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022: 1299-1308
  • [3] Model-based and Model-free Reinforcement Learning for Visual Servoing
    Farahmand, Amir Massoud
    Shademan, Azad
    Jagersand, Martin
    Szepesvari, Csaba
    ICRA: 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-7, 2009: 4135-4142
  • [4] Sim-to-Real Model-Based and Model-Free Deep Reinforcement Learning for Tactile Pushing
    Yang, Max
    Lin, Yijiong
    Church, Alex
    Lloyd, John
    Zhang, Dandan
    Barton, David A. W.
    Lepora, Nathan F.
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (09): 5480-5487
  • [5] Expert Initialized Hybrid Model-Based and Model-Free Reinforcement Learning
    Langaa, Jeppe
    Sloth, Christoffer
    2023 EUROPEAN CONTROL CONFERENCE, ECC, 2023
  • [6] Model-Based and Model-Free Replay Mechanisms for Reinforcement Learning in Neurorobotics
    Massi, Elisa
    Barthelemy, Jeanne
    Mailly, Juliane
    Dromnelle, Remi
    Canitrot, Julien
    Poniatowski, Esther
    Girard, Benoit
    Khamassi, Mehdi
    FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [7] Hybrid control for combining model-based and model-free reinforcement learning
    Pinosky, Allison
    Abraham, Ian
    Broad, Alexander
    Argall, Brenna
    Murphey, Todd D.
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2023, 42 (06): 337-355
  • [8] Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning
    Swazinna, Phillip
    Udluft, Steffen
    Hein, Daniel
    Runkler, Thomas
    IFAC PAPERSONLINE, 2022, 55 (15): 19-26
  • [9] Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning
    Lutter, Michael
    Silberbauer, Johannes
    Watson, Joe
    Peters, Jan
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021: 4163-4170
  • [10] Intelligent Navigation of a Magnetic Microrobot with Model-Free Deep Reinforcement Learning in a Real-World Environment
    Salehi, Amar
    Hosseinpour, Soleiman
    Tabatabaei, Nasrollah
    Soltani Firouz, Mahmoud
    Yu, Tingting
    MICROMACHINES, 2024, 15 (01)