Trajectory Design for UAV-Based Inspection System: A Deep Reinforcement Learning Approach

Cited by: 4
Authors
Zhang, Wei [1 ]
Yang, Dingcheng [1 ]
Wu, Fahui [1 ]
Xiao, Lin [1 ]
Affiliations
[1] Nanchang Univ, Dept Elect Informat Engn Sch, Nanchang 330031, Jiangxi, Peoples R China
Source
2023 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS, ICC WORKSHOPS | 2023
Keywords
cellular-connected UAV; patrol inspection; trajectory design; deep reinforcement learning; CONNECTIVITY;
DOI
10.1109/ICCWORKSHOPS57953.2023.10283670
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
In this paper, we consider a cellular-connected UAV inspection system, in which a UAV must traverse multiple fixed cruise points for aerial monitoring while maintaining satisfactory communication connectivity with the cellular network. We aim to minimize the weighted sum of the UAV's mission completion time and its expected communication interruption duration by jointly optimizing the traversal strategy and the UAV flight trajectory. Specifically, leveraging a state-of-the-art DRL algorithm, we discretize the problem in time to recast it as a Markov decision process (MDP), and propose an actor-critic architecture based on the twin-delayed deep deterministic policy gradient (TD3) algorithm for aerial-monitoring trajectory design (TD3-AM). The algorithm handles continuous control problems with infinite state and action spaces: the UAV interacts directly with the environment to learn a movement policy and outputs continuous action values. Simulation results show that the proposed algorithm outperforms the baseline methods.
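The abstract names the three TD3 ingredients that make the continuous-control formulation work: twin critics with a clipped minimum target, target-policy smoothing, and delayed actor/target updates. A minimal sketch of one such update step on a toy 2-D waypoint task is given below; it uses linear function approximators in place of deep networks, and every dimension, gain, and the toy environment are illustrative assumptions, not the paper's actual TD3-AM implementation.

```python
import numpy as np

# Minimal TD3-style update sketch (linear approximators, toy waypoint task).
# All dimensions and hyperparameters below are assumed for illustration.
rng = np.random.default_rng(0)
S_DIM, A_DIM, MAX_A = 4, 2, 1.0          # state = [position, vector-to-waypoint]
GAMMA, TAU, LR = 0.99, 0.005, 1e-3       # discount, Polyak rate, step size
POLICY_NOISE, NOISE_CLIP, DELAY = 0.2, 0.5, 2

def init(out_dim, in_dim):
    return rng.normal(scale=0.1, size=(out_dim, in_dim))

actor = init(A_DIM, S_DIM)               # linear stand-ins for deep networks
c1, c2 = init(1, S_DIM + A_DIM), init(1, S_DIM + A_DIM)
actor_t, c1_t, c2_t = actor.copy(), c1.copy(), c2.copy()  # target networks

def pi(W, s):                            # deterministic policy, clipped to action range
    return np.clip(W @ s, -MAX_A, MAX_A)

def q(W, s, a):                          # linear critic Q(s, a)
    return (W @ np.concatenate([s, a])).item()

def td3_step(step, s, a, r, s2):
    global actor
    # 1) Target-policy smoothing: perturb the target action with clipped noise.
    eps = np.clip(rng.normal(scale=POLICY_NOISE, size=A_DIM), -NOISE_CLIP, NOISE_CLIP)
    a2 = np.clip(pi(actor_t, s2) + eps, -MAX_A, MAX_A)
    # 2) Clipped double-Q: bootstrap from the smaller of the two target critics.
    y = r + GAMMA * min(q(c1_t, s2, a2), q(c2_t, s2, a2))
    # 3) Critic regression toward the TD target (gradient of (Q - y)^2).
    sa = np.concatenate([s, a])
    for W in (c1, c2):
        W -= LR * 2.0 * (q(W, s, a) - y) * sa
    # 4) Delayed actor update (ascend Q1) and Polyak-averaged targets.
    if step % DELAY == 0:
        w_a = c1[0, S_DIM:]              # dQ1/da for a linear critic
        actor += LR * np.outer(w_a, s)
        for W, Wt in ((actor, actor_t), (c1, c1_t), (c2, c2_t)):
            Wt += TAU * (W - Wt)

# Toy rollout: reward penalizes distance to a fixed waypoint, mimicking
# the mission-completion-time pressure in the objective (assumed setup).
pos, goal = np.zeros(2), np.array([1.0, 1.0])
for t in range(1, 101):
    s = np.concatenate([pos, goal - pos])
    a = np.clip(pi(actor, s) + rng.normal(scale=0.1, size=A_DIM), -MAX_A, MAX_A)
    pos = pos + 0.05 * a
    r = -np.linalg.norm(goal - pos)
    td3_step(t, s, a, r, np.concatenate([pos, goal - pos]))
```

In the full method, the linear maps would be deep networks trained from a replay buffer, and the reward would also encode the expected communication-interruption term; the update structure per step is the same.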
Pages: 1654-1659
Page count: 6
Related Papers
50 records
  • [31] Trajectory and Communication Design for Cache-Enabled UAVs in Cellular Networks: A Deep Reinforcement Learning Approach
    Ji, Jiequ
    Zhu, Kun
    Cai, Lin
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (10) : 6190 - 6204
  • [32] Joint Subcarrier Allocation, Modulation Mode Selection, and Trajectory Design in a UAV-Based OFDMA Network
    Li, Shichao
    Zhang, Ning
    Chen, Hongbin
    Lin, Siyu
    Wu, Huici
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (09) : 2111 - 2115
  • [33] Three-dimensional deep reinforcement learning for trajectory and resource optimization in UAV communication systems
    He, Chunlong
    Xu, Jiaming
    Li, Xingquan
    Li, Zhukun
    PHYSICAL COMMUNICATION, 2024, 63
  • [34] Reinforcement Learning for Trajectory Design in Cache-enabled UAV-assisted Cellular Networks
    Xu, Hu
    Ji, Jiequ
    Zhu, Kun
    Wang, Ran
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 2238 - 2243
  • [35] Reinforcement Learning for Decentralized Trajectory Design in Cellular UAV Networks With Sense-and-Send Protocol
    Hu, Jingzhi
    Zhang, Hongliang
    Song, Lingyang
    IEEE INTERNET OF THINGS JOURNAL, 2019, 6 (04) : 6177 - 6189
  • [36] A Novel Trajectory Design Approach for UAV Based on Finite Fourier Series
    Guo, Yijun
    Yin, Sixing
    Hao, Jianjun
    Du, Yu
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2020, 9 (05) : 671 - 674
  • [37] Multiagent Deep Reinforcement Learning for Wireless-Powered UAV Networks
    Oubbati, Omar Sami
    Lakas, Abderrahmane
    Guizani, Mohsen
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (17) : 16044 - 16059
  • [38] Energy Efficient Transmission Strategy for Mobile Edge Computing Network in UAV-Based Patrol Inspection System
    Yang, Dingcheng
    Wang, Jun
    Wu, Fahui
    Xiao, Lin
    Xu, Yu
    Zhang, Tiankui
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 5984 - 5998
  • [39] Optimal Design for Trajectory and Phase-Shift in UAV-Mounted-RIS Communications with Reinforcement Learning
    Sun, Tianjiao
    Yin, Sixing
    Li, Jiayue
    Li, Shufang
    2024 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING FOR COMMUNICATION AND NETWORKING, ICMLCN 2024, 2024, : 101 - 106
  • [40] Multi-Agent Model-Based Reinforcement Learning for Trajectory Design and Power Control in UAV-Enabled Networks
    Zhou, Shiyang
    Cheng, Yufan
    Lei, Xia
    2022 3RD INFORMATION COMMUNICATION TECHNOLOGIES CONFERENCE (ICTC 2022), 2022, : 33 - 38