Deep Q-Learning-Based Dynamic Management of a Robotic Cluster

Cited by: 4
Authors
Gautier, Paul [1 ]
Laurent, Johann [1 ]
Diguet, Jean-Philippe [2 ]
Affiliations
[1] Univ Bretagne Sud, Lab STICC, UMR6285 CNRS, F-56100 Lorient, France
[2] IRL2010 CNRS, CROSSING, Adelaide, SA 5000, Australia
Keywords
Task analysis; Robots; Drones; Resource management; Computational modeling; Robot kinematics; Servers; MRS; task distribution; robotic cluster; multi-agent systems; reinforcement learning; deep Q-learning; ALLOCATION; SYSTEMS;
DOI
10.1109/TASE.2022.3205651
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Subject classification code
0812;
Abstract
The ever-increasing demands for autonomy and precision have led to the development of computationally intensive multi-robot systems (MRS). However, many missions preclude the use of a robotic cloud. An alternative is a robotic cluster that distributes the computational load locally. This complex distribution requires adaptability to cope with a dynamic and uncertain environment. Classical approaches are too limited to solve this problem, but recent advances in reinforcement learning and deep learning offer new opportunities. In this paper we propose new Deep Q-Network (DQN) based approaches in which the MRS learns to distribute tasks directly from experience. Since the problem's complexity leads to a curse of dimensionality, we use two specific methods, a new branching architecture called Branching Dueling Q-Network (BDQ) and our own optimized multi-agent solution, and we compare them with classical market-based approaches as well as with non-distributed and purely local solutions. Our study shows the relevance of learning-based methods for task mapping and also highlights the capacity of the BDQ architecture to solve high-dimensional state-space problems.

Note to Practitioners: Many industrial applications, such as area exploration and monitoring, can be efficiently delegated to a group of small robots or autonomous vehicles, with advantages in reliability and cost over single-robot solutions. But autonomy requires increasingly compute-intensive tasks such as computer vision. On the other hand, small robots have energy constraints, limited embedded computing capacity, and usually restricted and/or unreliable communications that limit the use of cloud resources. An alternative solution to this problem consists in sharing the computing resources of the group of robots. Previous work was a proof of concept limited to the parallelisation of a single specific task. In this paper we formalize a general method that allows the group of robots to learn in the field how to distribute tasks efficiently in order to optimize the execution time of a mission under energy constraints. We demonstrate the relevance of our solution over market-based and non-distributed approaches by means of intensive simulations. This successful study is a necessary first step towards the distribution and parallelisation of computation tasks over a robotic cluster. The next steps, not yet tested, will address hardware-in-the-loop simulation and finally a real-life mission with a group of robots.
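To make the branching idea concrete, below is a minimal PyTorch sketch of a Branching Dueling Q-Network applied to task-to-robot mapping: a shared trunk feeds one state-value head and one advantage branch per task, so each branch scores the robots that could host that task. This is our own illustrative reading of the architecture named in the abstract, not the authors' implementation; the class name BranchingDuelingQNet, the layer sizes, and the state encoding are assumptions.

    import torch
    import torch.nn as nn

    class BranchingDuelingQNet(nn.Module):
        # Shared trunk, one state-value head, and one advantage branch per
        # task; each branch scores every robot that could host that task.
        # Layer sizes are illustrative, not taken from the paper.
        def __init__(self, state_dim, n_tasks, n_robots, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.value = nn.Linear(hidden, 1)  # shared state value V(s)
            self.branches = nn.ModuleList(
                [nn.Linear(hidden, n_robots) for _ in range(n_tasks)]
            )

        def forward(self, state):
            z = self.trunk(state)
            v = self.value(z)                  # shape (batch, 1)
            # Dueling aggregation per branch:
            # Q_d(s, a) = V(s) + A_d(s, a) - mean_a A_d(s, a)
            q = []
            for branch in self.branches:
                adv = branch(z)                # shape (batch, n_robots)
                q.append(v + adv - adv.mean(dim=1, keepdim=True))
            return torch.stack(q, dim=1)       # shape (batch, n_tasks, n_robots)

    # Greedy task mapping: for each task, pick the robot with the highest Q-value.
    net = BranchingDuelingQNet(state_dim=32, n_tasks=4, n_robots=6)
    state = torch.randn(1, 32)                 # e.g. loads, battery levels, link quality
    assignment = net(state).argmax(dim=2)      # shape (1, n_tasks): one robot per task

Because each task gets its own advantage branch over a shared representation, the number of network outputs grows as n_tasks * n_robots rather than n_robots ** n_tasks, which is what makes the high-dimensional joint action space tractable for Q-learning.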
Pages: 2503-2515
Number of pages: 13
Related papers
50 records in total
  • [21] Q-Learning-Based Dynamic Spectrum Access in Cognitive Industrial Internet of Things
    Li, Feng
    Lam, Kwok-Yan
    Sheng, Zhengguo
    Zhang, Xinggan
    Zhao, Kanglian
    Wang, Li
    Mobile Networks and Applications, 2018, 23: 1636-1644
  • [22] QGeo: Q-Learning-Based Geographic Ad Hoc Routing Protocol for Unmanned Robotic Networks
    Jung, Woo-Sung
    Yim, Jinhyuk
    Ko, Young-Bae
    IEEE Communications Letters, 2017, 21 (10): 2258-2261
  • [23] Q-Learning-Based Model Predictive Control for Energy Management in Residential Aggregator
    Ojand, Kianoosh
    Dagdougui, Hanane
    IEEE Transactions on Automation Science and Engineering, 2022, 19 (01): 70-81
  • [24] Using a Reinforcement Q-Learning-Based Deep Neural Network for Playing Video Games
    Lin, Cheng-Jian
    Jhang, Jyun-Yu
    Lin, Hsueh-Yi
    Lee, Chin-Ling
    Young, Kuu-Young
    Electronics, 2019, 8 (10)
  • [25] Dynamic Resource Management in Next Generation Networks based on Deep Q Learning
    Aslan, Aysun
    Bal, Gulce
    Toker, Cenk
    2020 28th Signal Processing and Communications Applications Conference (SIU), 2020
  • [26] Q-Learning-based fuzzy energy management for fuel cell/supercapacitor HEV
    Tao, Jili
    Zhang, Ridong
    Qiao, Zhijun
    Ma, Longhua
    Transactions of the Institute of Measurement and Control, 2022, 44 (10): 1939-1949
  • [27] Deep Q-Learning-Based Smart Scheduling of EVs for Demand Response in Smart Grids
    Chifu, Viorica Rozina
    Cioara, Tudor
    Pop, Cristina Bianca
    Rusu, Horia Gabriel
    Anghel, Ionut
    Applied Sciences-Basel, 2024, 14 (04)
  • [28] Q-learning-based unmanned aerial vehicle path planning with dynamic obstacle avoidance
    Sonny, Amala
    Yeduri, Sreenivasa Reddy
    Cenkeramaddi, Linga Reddy
    Applied Soft Computing, 2023, 147
  • [29] Q-learning-based dynamic joint control of interference and transmission opportunities for cognitive radio
    Jang, Sung-Jeen
    Yoo, Sang-Jo
    EURASIP Journal on Wireless Communications and Networking, 2018