Intelligent Spectrum and Airspace Resource Management for Urban Air Mobility Using Deep Reinforcement Learning

Cited: 0
Authors
Apaza, Rafael D. [1 ,2 ]
Han, Ruixuan [2 ]
Li, Hongxiang [2 ]
Knoblock, Eric J. [1 ]
Affiliations
[1] NASA Glenn Res Ctr, Cleveland, OH 44135 USA
[2] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
Source
IEEE ACCESS | 2024 / Vol. 12
Funding
National Aeronautics and Space Administration (NASA)
Keywords
Aircraft; Air traffic control; Base stations; Uplink; Time-frequency analysis; Resource management; Radio spectrum management; Downlink; Signal to noise ratio; Artificial intelligence; Aeronautics; artificial intelligence; spectrum management; resource allocation; urban air mobility; wireless communications;
DOI
10.1109/ACCESS.2024.3492113
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
In an era of surging air travel and heightened reliance on efficient communication systems, there is a critical need to intelligently allocate frequency resources for aviation communications and efficiently manage airspace operations. This is essential to ensure safe, smooth, and technologically advanced flight services. Over time, frequency-management techniques and new radio technologies have evolved to cope with the increased demands placed on the system by growing airspace activity. The emergence of Urban Air Mobility (UAM) operations poses a fresh challenge, further burdening the already limited aviation spectrum, and there is a pressing need for a new approach to managing and utilizing frequencies efficiently. This paper explores the application of the Multi-Agent Reinforcement Learning (MARL) technique to minimize aircraft mission completion time and enhance safety while respecting the limitations of airspace and frequency resources. The proposed MARL approach uses the Value Decomposition Network (VDN) technique to optimize frequency use, flight time, and departure wait times by jointly managing spectrum allocation, vehicle departure, and flight speed. The problem of minimizing mission completion time is formulated as a Markov Decision Process (MDP) that accounts for frequency channel availability, signal-to-interference-plus-noise power ratio, aircraft location, and flight status. In our investigation, we develop a case study scenario and assess the performance of the MARL technique through simulation of a hypothetical UAM scenario. The solution is evaluated against Q-Mixing (QMIX), Orthogonal Multiple Access, and a Heuristic Greedy Algorithm.
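The core idea behind the VDN technique named in the abstract is that the team's joint action-value decomposes into a sum of per-agent values, Q_tot(s, a_1, ..., a_n) = Σ_i Q_i(o_i, a_i), so a single shared reward can train each agent's component through that sum. Below is a minimal tabular sketch of this decomposition on a hypothetical two-agent, single-state coordination game; the Q-tables stand in for the neural networks of an actual VDN, and the toy reward is an assumption for illustration, not the paper's UAM environment:

```python
import numpy as np

# Toy VDN-style decomposition: two agents, one state, two actions each.
# The team earns reward 1 only when BOTH agents pick action 1, so the
# agents must coordinate using only the shared (decomposed) TD signal.
rng = np.random.default_rng(0)
n_agents, n_actions = 2, 2
q = np.zeros((n_agents, n_actions))  # per-agent value components Q_i(a_i)
alpha, eps = 0.1, 0.2                # learning rate, exploration rate

def team_reward(actions):
    """Shared team reward: 1 only on the coordinated joint action (1, 1)."""
    return 1.0 if all(a == 1 for a in actions) else 0.0

for step in range(2000):
    # Each agent acts epsilon-greedily on its OWN decomposed value table.
    actions = [
        int(rng.integers(n_actions)) if rng.random() < eps
        else int(np.argmax(q[i]))
        for i in range(n_agents)
    ]
    r = team_reward(actions)
    # One-step episode: the target is the team reward, and the TD error
    # on Q_tot = sum of components is shared by every agent's table.
    q_tot = sum(q[i, a] for i, a in enumerate(actions))
    td_error = r - q_tot
    for i, a in enumerate(actions):
        q[i, a] += alpha * td_error

greedy = [int(np.argmax(q[i])) for i in range(n_agents)]
print(greedy)  # both agents learn the coordinated action 1
```

Because the joint value is additive, credit for the team reward flows to each agent's own table without any agent observing the others' actions, which is what lets the paper's vehicles choose spectrum, departure, and speed actions independently at execution time.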
Pages: 164750-164766
Page count: 17
Related Papers
50 records in total
  • [1] Deep Reinforcement Learning Assisted Spectrum Management in Cellular Based Urban Air Mobility
    Han, Ruixuan
    Li, Hongxiang
    Apaza, Rafael
    Knoblock, Eric
    Gasper, Michael
    IEEE WIRELESS COMMUNICATIONS, 2022, 29 (06) : 14 - 21
  • [2] Deep Reinforcement Learning for Intelligent Cloud Resource Management
    Zhou, Zhi
    Luo, Ke
    Chen, Xu
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM WKSHPS 2021), 2021,
  • [3] Intelligent Cloud Resource Management with Deep Reinforcement Learning
    Zhang, Yu
    Yao, Jianguo
    Guan, Haibing
    IEEE CLOUD COMPUTING, 2017, 4 (06): : 60 - 69
  • [4] Dynamic Spectrum Sharing in Cellular Based Urban Air Mobility via Deep Reinforcement Learning
    Han, Ruixuan
    Li, Hongxiang
    Knoblock, Eric J.
    Gasper, Michael R.
    Apaza, Rafael D.
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 1332 - 1337
  • [5] Intelligent Cruise Guidance and Vehicle Resource Management With Deep Reinforcement Learning
    Sun, Guolin
    Liu, Kai
    Boateng, Gordon Owusu
    Liu, Guisong
    Jiang, Wei
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (05) : 3574 - 3585
  • [6] Multi-agent Deep Reinforcement Learning for Spectrum and Air Traffic Management in UAM with Resource Constraints
    Apaza, Rafael D.
    Li, Hongxiang
    Han, Ruixuan
    Knoblock, Eric
    2023 IEEE/AIAA 42ND DIGITAL AVIONICS SYSTEMS CONFERENCE, DASC, 2023,
  • [7] Intelligent Demand Response Resource Trading Using Deep Reinforcement Learning
    Zhang, Yufan
    Ai, Qian
    Li, Zhaoyu
    CSEE JOURNAL OF POWER AND ENERGY SYSTEMS, 2024, 10 (06): : 2621 - 2630
  • [8] Resource Management with Deep Reinforcement Learning
    Mao, Hongzi
    Alizadeh, Mohammad
    Menache, Ishai
    Kandula, Srikanth
    PROCEEDINGS OF THE 15TH ACM WORKSHOP ON HOT TOPICS IN NETWORKS (HOTNETS '16), 2016, : 50 - 56
  • [9] Intelligent Spectrum Sensing and Resource Allocation in Cognitive Networks via Deep Reinforcement Learning
    Nguyen, Dinh C.
    Love, David J.
    Brinton, Christopher G.
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4603 - 4608
  • [10] Intelligent Dynamic Spectrum Access Using Deep Reinforcement Learning for VANETs
    Wang, Yonghua
    Li, Xueyang
    Wan, Pin
    Shao, Ruiyu
    IEEE SENSORS JOURNAL, 2021, 21 (14) : 15554 - 15563