Comparison of Model-Based and Model-Free Reinforcement Learning for Real-World Dexterous Robotic Manipulation Tasks

Cited by: 3
Authors:
Valencia, David [1 ]
Jia, John [1 ]
Li, Raymond [1 ]
Hayashi, Alex [2 ]
Lecchi, Megan [2 ]
Terezakis, Reuel [2 ]
Gee, Trevor [1 ]
Liarokapis, Minas [2 ]
MacDonald, Bruce A. [1 ]
Williams, Henry [1 ]
Affiliations:
[1] Univ Auckland, Ctr Automat & Robot Engn Sci, Auckland, New Zealand
[2] Univ Auckland, New Dexter Res Grp, Auckland, New Zealand
DOI: 10.1109/ICRA48891.2023.10160983
CLC Number: TP [Automation Technology, Computer Technology]
Subject Classification: 0812
Abstract:
Model-Free Reinforcement Learning (MFRL) has shown significant promise for learning dexterous robotic manipulation tasks, at least in simulation. However, the large number of samples required, as well as the long training times, prevents MFRL from scaling to complex real-world tasks. Model-Based Reinforcement Learning (MBRL) emerges as a potential solution that, in theory, can improve the data efficiency of MFRL approaches. This could drastically reduce the training time of MFRL and increase the applicability of RL to real-world robotic tasks. This article presents a study on the feasibility of using state-of-the-art MBRL to improve the training time for two real-world dexterous manipulation tasks. The evaluation is conducted on a real low-cost robot gripper, where both the predictive model and the control policy are learned from scratch. The results indicate that MBRL is capable of learning accurate models of the world, but does not show the clear improvements in learning the control policy in the real world that prior literature suggests should be expected.
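To illustrate the model-based idea the abstract describes — learning a predictive model of the environment and using it to supplement model-free updates for better data efficiency — here is a minimal tabular Dyna-Q sketch on a toy 5-state chain. This is purely illustrative: the paper's experiments use a real robot gripper with learned dynamics models, not this toy problem, and all names and parameters below are assumptions for the sketch.

```python
import random

# Toy 5-state chain: action 1 moves right, action 0 moves left (bounded at 0).
# Reaching the rightmost state ends the episode with reward 1.0.
N_STATES, ACTIONS = 5, (0, 1)

def step(s, a):
    s2 = min(N_STATES - 1, s + 1) if a == 1 else max(0, s - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def dyna_q(episodes=50, planning_steps=10, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    model = {}  # learned world model: (s, a) -> (s', r); deterministic here

    def greedy_action(s):
        best = max(Q[(s, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if Q[(s, a)] == best])  # random tie-break

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(ACTIONS) if rng.random() < eps else greedy_action(s)
            s2, r, done = step(s, a)
            # model-free Q-learning update from real experience
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            model[(s, a)] = (s2, r)  # fit the model to the observed transition
            # planning: extra updates from transitions simulated by the model,
            # i.e. more learning per real environment sample
            for _ in range(planning_steps):
                ps, pa = rng.choice(list(model))
                ps2, pr = model[(ps, pa)]
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS) - Q[(ps, pa)])
            s = s2

    return Q

Q = dyna_q()
# Greedy policy for the non-terminal states: always move right toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

The planning loop is where the model-based data efficiency comes from: each real transition is amortized over many simulated updates. The paper's finding is that this advantage, clear in toy settings like this one, did not translate into a clear policy-learning improvement on their real-world gripper tasks.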
Pages: 871-878 (8 pages)
Related Papers (50 items total)
  • [21] MODEL-FREE ONLINE REINFORCEMENT LEARNING OF A ROBOTIC MANIPULATOR
    Sweafford, Jerry, Jr.
    Fahimi, Farbod
    MECHATRONIC SYSTEMS AND CONTROL, 2019, 47 (03): 136-143
  • [22] Prosocial learning: Model-based or model-free?
    Navidi, Parisa
    Saeedpour, Sepehr
    Ershadmanesh, Sara
    Hossein, Mostafa Miandari
    Bahrami, Bahador
    PLOS ONE, 2023, 18 (06)
  • [23] Real-World Image Deraining Using Model-Free Unsupervised Learning
    Yu, Rongwei
    Xiang, Jingyi
    Shu, Ni
    Zhang, Peihao
    Li, Yizhan
    Shen, Yiyang
    Wang, Weiming
    Wang, Lina
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2024, 2024
  • [24] Predictive representations can link model-based reinforcement learning to model-free mechanisms
    Russek, Evan M.
    Momennejad, Ida
    Botvinick, Matthew M.
    Gershman, Samuel J.
    Daw, Nathaniel D.
    PLOS COMPUTATIONAL BIOLOGY, 2017, 13 (09)
  • [25] Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning
    Chebotar, Yevgen
    Hausman, Karol
    Zhang, Marvin
    Sukhatme, Gaurav
    Schaal, Stefan
    Levine, Sergey
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [26] Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task
    Skatova, Anya
    Chan, Patricia A.
    Daw, Nathaniel D.
    FRONTIERS IN HUMAN NEUROSCIENCE, 2013, 7
  • [27] Dyna-style Model-based reinforcement learning with Model-Free Policy Optimization
    Dong, Kun
    Luo, Yongle
    Wang, Yuxin
    Liu, Yu
    Qu, Chengeng
    Zhang, Qiang
    Cheng, Erkang
    Sun, Zhiyong
    Song, Bo
    KNOWLEDGE-BASED SYSTEMS, 2024, 287
  • [28] The modulation of acute stress on model-free and model-based reinforcement learning in gambling disorder
    Wyckmans, Florent
    Banerjee, Nilosmita
    Saeremans, Melanie
    Otto, Ross
    Kornreich, Charles
    Vanderijst, Laetitia
    Gruson, Damien
    Carbone, Vincenzo
    Bechara, Antoine
    Buchanan, Tony
    Noel, Xavier
    JOURNAL OF BEHAVIORAL ADDICTIONS, 2022, 11 (03) : 831 - 844
  • [29] Model-based decision making and model-free learning
    Drummond, Nicole
    Niv, Yael
    CURRENT BIOLOGY, 2020, 30 (15) : R860 - R865
  • [30] Model-Free and Model-Based Active Learning for Regression
    O'Neill, Jack
    Delany, Sarah Jane
    MacNamee, Brian
    ADVANCES IN COMPUTATIONAL INTELLIGENCE SYSTEMS, 2017, 513 : 375 - 386