Trajectory tracking control of an unmanned aerial vehicle with deep reinforcement learning for tasks inside the EAST

Cited by: 2
Authors
Yu, Chao [1 ,2 ]
Yang, Yang [1 ]
Cheng, Yong [1 ]
Wang, Zheng [3 ]
Shi, Mingming [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Plasma Phys, Hefei Inst Phys Sci, Hefei 230031, Peoples R China
[2] Univ Sci & Technol China, Hefei 230026, Peoples R China
[3] Hefei Univ Technol, Hefei, Peoples R China
Keywords
EAST; UAV; Trajectory tracking; Remote handling; Deep reinforcement learning; CONCEPTUAL DESIGN; DEPLOYER
DOI
10.1016/j.fusengdes.2023.113894
CLC classification
TL [Atomic energy technology]; O571 [Nuclear physics]
Subject classification codes
0827; 082701
Abstract
The robotic arms inside the EAST (Experimental Advanced Superconducting Tokamak) are bulky and slow, making them unable to efficiently complete remote handling tasks such as inspection and grasping. Miniature intelligent UAVs have the potential to assist in these remote handling tasks. A key challenge is achieving autonomous flight along a prescribed trajectory within the EAST vacuum vessel. This paper presents an autonomous UAV system based on deep reinforcement learning for this purpose. The autonomous flight of a quadrotor UAV within the EAST was simulated in an OpenAI Gym-style environment. To verify that the trained policy transfers to hardware, we conducted real-world experiments in which the UAV tracked specified trajectories. The results show that our autonomous UAV system can complete trajectory-tracking flight tasks inside the EAST vacuum vessel.
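The abstract mentions training the policy in an OpenAI Gym-style simulation of the quadrotor. The sketch below is a minimal, hypothetical illustration of what such a trajectory-tracking environment can look like; the class name, the point-mass dynamics, and the reward terms are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical Gym-style trajectory-tracking environment (classic gym API).
# All names and dynamics are illustrative assumptions, not the authors' code.
import numpy as np
import gym
from gym import spaces


class QuadrotorTrackingEnv(gym.Env):
    """Reward the agent for staying close to a reference trajectory."""

    def __init__(self, reference_trajectory, dt=0.02):
        self.reference = np.asarray(reference_trajectory)  # (T, 3) waypoints
        self.dt = dt
        self.t = 0
        # Observation: position error (3) + velocity (3)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(6,), dtype=np.float32)
        # Action: normalized acceleration command, standing in for the
        # thrust/attitude commands of the real quadrotor.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(3,), dtype=np.float32)
        self.pos = np.zeros(3)
        self.vel = np.zeros(3)

    def reset(self):
        self.t = 0
        self.pos = self.reference[0].copy()
        self.vel = np.zeros(3)
        return self._obs()

    def step(self, action):
        # Simplified point-mass dynamics in place of the full quadrotor model.
        accel = 5.0 * np.asarray(action, dtype=np.float64)
        self.vel += accel * self.dt
        self.pos += self.vel * self.dt
        self.t += 1
        ref = self.reference[min(self.t, len(self.reference) - 1)]
        err = np.linalg.norm(self.pos - ref)
        # Penalize tracking error and control effort.
        reward = -err - 0.01 * float(np.linalg.norm(action))
        done = self.t >= len(self.reference) - 1 or err > 1.0
        return self._obs(), reward, done, {}

    def _obs(self):
        ref = self.reference[min(self.t, len(self.reference) - 1)]
        return np.concatenate([self.pos - ref, self.vel]).astype(np.float32)


# Example usage: a circular reference path and one random-policy rollout.
theta = np.linspace(0, 2 * np.pi, 200)
ref = np.stack([np.cos(theta), np.sin(theta), np.full_like(theta, 1.5)], axis=1)
env = QuadrotorTrackingEnv(ref)
obs, done = env.reset(), False
while not done:
    obs, reward, done, _ = env.step(env.action_space.sample())
```

In practice a policy from a standard deep RL library would replace the random actions above; the paper's actual state, action, and reward definitions may differ.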
Pages: 10
Related papers
50 records in total
  • [21] Interval Observer-based Robust Trajectory Tracking Control for Quadrotor Unmanned Aerial Vehicle
    Yan, Kun
    Zhang, Jing-Rong
    Ren, Hai-Peng
    International Journal of Control, Automation and Systems, 2024, 22 : 288 - 300
  • [22] Monte Carlo-based reinforcement learning control for unmanned aerial vehicle systems
    Wei, Qinglai
    Yang, Zesheng
    Su, Huaizhong
    Wang, Lijian
    NEUROCOMPUTING, 2022, 507 : 282 - 291
  • [23] Dual Deep Neural Networks for Improving Trajectory Tracking Control of Unmanned Surface Vehicle
    Sun, Wenli
    Gao, Xu
    Yu, Yanli
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 3441 - 3446
  • [24] Unmanned Aerial Vehicle Path Planning in Complex Dynamic Environments Based on Deep Reinforcement Learning
    Liu, Jiandong
    Luo, Wei
    Zhang, Guoqing
    Li, Ruihao
    MACHINES, 2025, 13 (02)
  • [25] Deep reinforcement learning with intrinsic curiosity module based trajectory tracking control for USV
    Wu, Chuanbo
    Yu, Wanneng
    Liao, Weiqiang
    Ou, Yanghangcheng
    OCEAN ENGINEERING, 2024, 308
  • [26] Task Offloading Strategy for Unmanned Aerial Vehicle Power Inspection Based on Deep Reinforcement Learning
    Zhuang, Wei
    Xing, Fanan
    Lu, Yuhang
    SENSORS, 2024, 24 (07)
  • [27] Trusted Geographic Routing Protocol Based on Deep Reinforcement Learning for Unmanned Aerial Vehicle Network
    Zhang, Yanan
    Qiu, Hongbing
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2022, 44 (12) : 4211 - 4217
  • [28] Optimal tracking control of flight trajectory for unmanned aerial vehicles
    Khan, Md Shehzad
    Su, Hao
    Tang, Gong-You
    2018 IEEE 27TH INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS (ISIE), 2018, : 264 - 269
  • [29] Trajectory tracking control for a quadrotor unmanned aerial vehicle based on dynamic surface active disturbance rejection control
    Zhang, Yong
    Chen, Zengqiang
    Sun, Mingwei
    TRANSACTIONS OF THE INSTITUTE OF MEASUREMENT AND CONTROL, 2020, 42 (12) : 2198 - 2205
  • [30] Speed and heading control of an unmanned surface vehicle using deep reinforcement learning
    Wu, Ting
    Ye, Hui
    Xiang, Zhengrong
    Yang, Xiaofei
    2023 IEEE 12TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE, DDCLS, 2023, : 573 - 578