UAV reinforcement learning control algorithm with demonstrations

Citations: 0
Authors
Sun D. [1 ,2 ]
Gao D. [1 ,2 ]
Zheng J. [1 ,2 ]
Han P. [1 ]
Affiliations
[1] National Space Science Center, Chinese Academy of Sciences, Beijing
[2] University of Chinese Academy of Sciences, Beijing
Source
Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2023, Vol. 49, No. 6
Keywords
autonomous control; demonstrations; learning systems; reinforcement learning; unmanned aerial vehicle;
DOI
10.13700/j.bh.1001-5965.2021.0466
Abstract
The practical application of reinforcement learning (RL) to unmanned aerial vehicle (UAV) control is restricted by low learning efficiency. An algorithm integrating RL with imitation learning was proposed to improve the performance of autonomous flight control systems. By establishing new loss and value functions, demonstrations were incorporated as supervisory signals in the updates of the actor and critic networks. Two replay buffers were used to store demonstration data and the data generated by interaction with the environment, respectively. A prioritized experience replay mechanism enhanced the use of high-quality data and allowed the proportion of experience data utilized during learning to be adjusted. Simulation results showed that the RL control algorithm with demonstrations quickly obtained high rewards in the early stage of training and maintained higher rewards than the conventional RL algorithm throughout the training process. The control strategy obtained by the proposed algorithm had a faster response and higher control precision. Demonstrations improve both the performance of the algorithm and the learning efficiency of the UAV autonomous control system, making it easier to learn effective control strategies. The addition of demonstrations also expands the experience data and increases the stability of the algorithm, making the UAV autonomous control system robust to the setting of the reward function. © 2023 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
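The abstract describes the method only at a high level: demonstration-supervised actor-critic updates, separate buffers for demonstration and interaction data, and prioritized sampling between them. The Python sketch below illustrates one way such an update could be structured under those assumptions; it is not the authors' implementation, and all names, network sizes, and hyperparameters (e.g., PrioritizedBuffer, BC_WEIGHT, demo_ratio) are illustrative.

```python
# Hedged sketch: actor-critic RL update that mixes demonstration and interaction
# data from two replay buffers with prioritized sampling. Illustrative only.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM = 12, 4        # assumed UAV state/action sizes
GAMMA, BC_WEIGHT = 0.99, 1.0         # assumed discount and behavior-cloning weight

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

class PrioritizedBuffer:
    """Minimal proportional prioritized replay (sketch, not the paper's code)."""
    def __init__(self, capacity=10000):
        self.data, self.prio, self.capacity = [], [], capacity
    def add(self, transition, priority=1.0):
        # transition = (state, action, reward, next_state) as numpy arrays/floats
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prio.pop(0)
        self.data.append(transition); self.prio.append(priority)
    def sample(self, k):
        if not self.data or k <= 0:
            return []
        p = np.asarray(self.prio, dtype=np.float64); p /= p.sum()
        idx = np.random.choice(len(self.data), size=min(k, len(self.data)), p=p)
        return [self.data[i] for i in idx]

demo_buffer = PrioritizedBuffer()    # filled offline with demonstration transitions
agent_buffer = PrioritizedBuffer()   # filled online by interacting with the environment

def update(batch_size=64, demo_ratio=0.25):
    """One actor-critic update on a batch mixing demonstration and agent data."""
    demo_samples = demo_buffer.sample(int(batch_size * demo_ratio))
    agent_samples = agent_buffer.sample(batch_size - len(demo_samples))
    batch = demo_samples + agent_samples
    if not batch:
        return
    is_demo = torch.tensor([1.0] * len(demo_samples) + [0.0] * len(agent_samples))
    s, a, r, s2 = (torch.tensor(np.array(x), dtype=torch.float32) for x in zip(*batch))

    # Critic: one-step TD target (target networks omitted for brevity).
    with torch.no_grad():
        target = r.unsqueeze(1) + GAMMA * critic(torch.cat([s2, actor(s2)], dim=1))
    critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy-gradient term plus a behavior-cloning term
    # applied only to demonstration samples (one way to use demos as supervision).
    pi = actor(s)
    q_term = -critic(torch.cat([s, pi], dim=1)).mean()
    bc_term = (is_demo.unsqueeze(1) * (pi - a) ** 2).mean()
    actor_loss = q_term + BC_WEIGHT * bc_term
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

Here the behavior-cloning term penalizes deviation from demonstrated actions only on demonstration samples, which is one common way to inject demonstrations into actor and critic updates; the paper's actual loss and value functions may differ.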
Pages: 1424-1433
Number of pages: 9