Joint path planning and power allocation of a cellular-connected UAV using apprenticeship learning via deep inverse reinforcement learning

Cited by: 3
Authors
Shamsoshoara, Alireza [1 ,2 ]
Lotfi, Fatemeh [2 ]
Mousavi, Sajad [2 ,3 ]
Afghah, Fatemeh [2 ]
Guevenc, Ismail [2 ,4 ]
Affiliations
[1] No Arizona Univ, Sch Informat Comp & Cyber Syst, Flagstaff, AZ 86011 USA
[2] Clemson Univ, Dept Elect & Comp Engn, Clemson, SC 29634 USA
[3] Harvard Med Sch, Boston, MA USA
[4] North Carolina State Univ, Raleigh, NC USA
Funding
US National Science Foundation
Keywords
Apprenticeship learning; Cellular-connected drones; Inverse reinforcement learning; Path planning; UAV communication; COMMUNICATION; NETWORKS; ALTITUDE;
DOI
10.1016/j.comnet.2024.110789
Chinese Library Classification
TP3 [Computing technology; Computer technology]
Discipline code
0812
Abstract
This paper investigates an interference-aware joint path planning and power allocation mechanism for a cellular-connected unmanned aerial vehicle (UAV) in a sparse suburban environment. The UAV's goal is to fly from an initial point to a destination point by moving along the cells while guaranteeing the required quality of service (QoS). In particular, the UAV aims to maximize its uplink throughput and minimize interference to the ground user equipment (UE) connected to neighboring cellular base stations (BSs), considering both the shortest path and limitations on flight resources. Expert knowledge is used to experience the scenario and to define the desired behavior for training the agent (i.e., the UAV). To solve the problem, an apprenticeship learning method is applied via inverse reinforcement learning (IRL) based on both Q-learning and deep reinforcement learning (DRL). The performance of this method is compared to a learning-from-demonstration technique called behavioral cloning (BC) that uses a supervised learning approach. Simulation and numerical results show that the proposed approach can achieve expert-level performance. We also demonstrate that, unlike the BC technique, the performance of the proposed approach does not degrade in unseen situations.
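To illustrate the apprenticeship-learning loop summarized in the abstract, the following is a minimal sketch (not the authors' code) of IRL by feature-expectation matching, in the style of Abbeel and Ng's projection method, on a toy grid world. The grid size, one-hot features, value-iteration inner solver, and hand-coded expert policy are all illustrative assumptions; the paper's features (uplink throughput, interference to neighboring UEs, distance to the destination) and its Q-learning/DRL inner loop would take their place in a faithful implementation.

```python
# Minimal apprenticeship-learning-via-IRL sketch (projection method) on a toy
# grid world. All problem details here are illustrative assumptions.
import numpy as np

N = 5                                   # toy 5x5 grid; states indexed 0..24
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
GAMMA = 0.9

def step(s, a):
    r, c = divmod(s, N)
    dr, dc = ACTIONS[a]
    r, c = min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1)
    return r * N + c

def phi(s):
    # One-hot state features; the paper's features (throughput, interference,
    # distance to goal) would replace this in a faithful implementation.
    f = np.zeros(N * N)
    f[s] = 1.0
    return f

def feature_expectations(policy, start=0, horizon=50):
    # Discounted feature counts of a deterministic policy from a fixed start.
    mu, s = np.zeros(N * N), start
    for t in range(horizon):
        mu += (GAMMA ** t) * phi(s)
        s = step(s, policy[s])
    return mu

def solve_rl(w, iters=200):
    # Value iteration under reward r(s) = w . phi(s); stands in for the
    # Q-learning / DRL inner loop used in the paper.
    V = np.zeros(N * N)
    for _ in range(iters):
        V = np.array([max(w @ phi(s) + GAMMA * V[step(s, a)]
                          for a in range(4)) for s in range(N * N)])
    return np.array([int(np.argmax([w @ phi(s) + GAMMA * V[step(s, a)]
                                    for a in range(4)])) for s in range(N * N)])

# Expert demonstration: move right, then down, toward the far corner.
expert_policy = np.array([3 if s % N < N - 1 else 1 for s in range(N * N)])
mu_E = feature_expectations(expert_policy)

mu_bar = feature_expectations(np.zeros(N * N, dtype=int))  # initial policy
for it in range(20):
    w = mu_E - mu_bar                   # reward weights from the feature gap
    policy = solve_rl(w)
    mu = feature_expectations(policy)
    # Projection step: move mu_bar toward mu along the direction to mu_E.
    d = mu - mu_bar
    mu_bar = mu_bar + (d @ (mu_E - mu_bar)) / (d @ d + 1e-12) * d
    margin = np.linalg.norm(mu_E - mu_bar)
    if margin < 1e-3:                   # apprentice matches expert features
        break
print("margin after", it + 1, "iterations:", round(float(margin), 4))
```

The loop alternates between fitting reward weights that separate the expert's feature expectations from the current policy's, and re-solving the forward RL problem under that reward, which is the same structure the paper uses with its Q-learning and DRL solvers.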
Pages: 20