Graph and dynamics interpretation in robotic reinforcement learning task

Cited by: 6
Authors
Yao, Zonggui [1 ]
Yu, Jun [1 ]
Zhang, Jian [2 ]
He, Wei [3 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp Sci & Technol, Key Lab Complex Syst Modeling & Simulat, Hangzhou 310018, Peoples R China
[2] Zhejiang Int Studies Univ, Sch Sci & Technol, Hangzhou 310012, Peoples R China
[3] Univ Sci & Technol Beijing, Sch Automat & Elect Engn, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks (GNNs); Dynamics estimations; Robotic controls; Robotic force transmission; Trajectory following; Reinforcement learning; GRADIENT;
DOI
10.1016/j.ins.2022.08.041
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Robot control tasks are typically solved by reinforcement learning approaches through a cyclic process of trial and learning. A recent trend in robotic reinforcement learning research is the employment of deep learning methods. Existing deep learning methods achieve control by training approximation models of the dynamics function, the value function, or the policy function used in the control algorithm. However, these methods usually approach the modeling from a statistical perspective without considering the physical characteristics of the robot's motion. A typical problem is that the force transmission through the different parts of a robot and the calculation of the robot's dynamics quantities are prone to being ignored. To this end, we propose to use a force transmission graph to interpret the force transmission mechanism obeyed by the robot's motion, and to estimate the dynamics quantities of the motion with a quadratic model. Following this idea, we propose a model-based reinforcement learning framework for robotic control in which the dynamics model comprises two components: a Graph Convolutional Network (GCN) and a Two-Layer Perceptron (TLP) network. The GCN serves as a parameter estimator of the force transmission graph and as a structural feature extractor. The TLP network approximates the quadratic model that estimates the dynamics quantities of the robot's motion. Accordingly, the proposed framework is named the GCN for Dynamics estimation in Reinforcement Learning method (GDRL for short). The method interprets the intrinsic mechanism of robotic force transmission through the robot's limbs, so the model is highly interpretable. Experimental results show that GDRL predicts the pose and location of the robot's next move well, such that our method surpasses previous methods on robot control tasks in our setting with multiple types of robots. We also compared against previous model-free methods in our task setting, and the results are outstanding, which we attribute to the interpretation of the physical characteristics. (C) 2022 Elsevier Inc. All rights reserved.
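The two-component architecture described in the abstract can be illustrated with a compact sketch. The following is a minimal, hypothetical PyTorch reconstruction, not the authors' implementation: it assumes one symmetrically normalized graph convolution over per-link features of the force transmission graph, followed by a two-layer perceptron head that regresses the next state. All layer sizes, class names, the residual state update, and the state/action encoding are illustrative assumptions; in particular, the generic MLP head here merely stands in for the paper's quadratic model.

```python
# Illustrative sketch of a GDRL-style dynamics model (assumptions noted above).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # Symmetrically normalize the adjacency with self-loops.
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        return torch.relu(self.linear(a_norm @ h))

class GDRLDynamicsModel(nn.Module):
    """GCN feature extractor over robot links plus a two-layer perceptron
    head that predicts the next state given the current state and action."""
    def __init__(self, n_links, feat_dim, action_dim, hidden_dim=64):
        super().__init__()
        self.gcn = GCNLayer(feat_dim, hidden_dim)
        self.tlp = nn.Sequential(  # the "TLP" head (generic stand-in)
            nn.Linear(n_links * hidden_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_links * feat_dim),
        )

    def forward(self, node_feats, adj, action):
        # node_feats: (n_links, feat_dim) per-link state (e.g., pose, velocity)
        # adj: (n_links, n_links) force-transmission graph of the robot body
        h = self.gcn(node_feats, adj).flatten()
        delta = self.tlp(torch.cat([h, action]))
        return node_feats + delta.view_as(node_feats)  # predicted next state

# Usage on a toy 3-link chain: force is transmitted between links 0-1 and 1-2.
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
model = GDRLDynamicsModel(n_links=3, feat_dim=4, action_dim=2)
next_state = model(torch.randn(3, 4), adj, torch.randn(2))
print(next_state.shape)  # torch.Size([3, 4])
```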
Pages: 317-334
Number of pages: 18