Graph and dynamics interpretation in robotic reinforcement learning task

Cited: 6
Authors
Yao, Zonggui [1 ]
Yu, Jun [1 ]
Zhang, Jian [2 ]
He, Wei [3 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Key Lab Complex Syst Modeling & Simulat, Hangzhou 310018, Peoples R China
[2] Zhejiang Int Studies Univ, Sch Sci & Technol, Hangzhou 310012, Peoples R China
[3] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks (GNNs); Dynamics estimations; Robotic controls; Robotic force transmission; Trajectory following; Reinforcement learning; GRADIENT;
D O I
10.1016/j.ins.2022.08.041
CLC classification number
TP [Automation Technology, Computer Technology];
Discipline code
0812 ;
Abstract
Robot control tasks are typically solved by reinforcement learning approaches through an iterative cycle of trial and learning. A recent trend in robotic reinforcement learning is the adoption of deep learning methods, which achieve control by training approximation models of the dynamics function, the value function, or the policy function. However, these methods usually treat modeling as a purely statistical problem and ignore the physical characteristics of the robot's motion; in particular, the force transmission through the robot's parts and the calculation of the robot's dynamics quantities tend to be neglected. To address this, we propose to use a force transmission graph to interpret the force transmission mechanism governing the robot's motion and to estimate the dynamics quantities of that motion with a quadratic model. Following this idea, we propose a model-based reinforcement learning framework for robotic control in which the dynamics model comprises two components: a Graph Convolutional Network (GCN) and a Two-Layer Perceptron (TLP) network. The GCN serves as both a parameter estimator of the force transmission graph and a structural feature extractor, while the TLP approximates the quadratic model that estimates the dynamics quantities of the robot's motion. Accordingly, the proposed framework is named the GCN-based Dynamics estimation in Reinforcement Learning method (GDRL for short). Because the method interprets the intrinsic mechanism of force transmission through the robot's limbs, the model is highly interpretable. Experimental results show that GDRL accurately predicts the robot's pose and location for the next move, and its performance surpasses that of previous methods in our control-task setting covering multiple types of robots.
We also compared GDRL against previous model-free methods in the same task setting; the results are outstanding, which we attribute to the interpretation of the physical characteristics. (C) 2022 Elsevier Inc. All rights reserved.
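The abstract describes the dynamics model as a graph convolution over the robot's force transmission graph, followed by a two-layer perceptron that maps the extracted structural features to predicted dynamics quantities. A minimal NumPy sketch of that pipeline is shown below; the layer sizes, the chain-shaped adjacency matrix, the mean pooling, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: aggregate per-joint features over the
    force transmission graph A (symmetric normalization with self-loops),
    then apply a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def tlp(h, W1, b1, W2, b2):
    """Two-layer perceptron: maps pooled graph features to predicted
    next-step dynamics quantities (e.g. pose deltas)."""
    return np.maximum(h @ W1 + b1, 0.0) @ W2 + b2

rng = np.random.default_rng(0)
n_joints, f_in, f_hid, state_dim = 4, 6, 16, 3

# Force transmission graph of a 4-joint serial chain: joint i <-> i+1.
A = np.zeros((n_joints, n_joints))
for i in range(n_joints - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

X = rng.standard_normal((n_joints, f_in))               # per-joint features
H = gcn_layer(A, X, rng.standard_normal((f_in, f_hid))) # structural features
pooled = H.mean(axis=0)                                 # graph-level summary
next_state = tlp(pooled,
                 rng.standard_normal((f_hid, f_hid)), np.zeros(f_hid),
                 rng.standard_normal((f_hid, state_dim)), np.zeros(state_dim))
print(next_state.shape)  # one predicted state vector per forward pass
```

In a model-based RL loop, such a predictor would be trained on observed transitions and then queried by the planner or policy; here the weights are random and only the data flow is demonstrated.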
Pages: 317-334
Page count: 18