Deep Reinforcement Learning-Based Joint Routing and Capacity Optimization in an Aerial and Terrestrial Hybrid Wireless Network

Cited by: 2
Authors
Wang, Zhe [1 ]
Li, Hongxiang [1 ]
Knoblock, Eric J. [2 ]
Apaza, Rafael D. [1 ,2 ]
Affiliations
[1] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
[2] NASA Glenn Res Ctr, Cleveland, OH 44135 USA
Keywords
Delays; Routing; Optimization; Relays; Routing protocols; Vectors; Uplink; ATHN; packet routing; E2E delay; capacity; DRL-based algorithm; D3QN; HOC; INTERNET;
DOI
10.1109/ACCESS.2024.3430191
Chinese Library Classification
TP [automation technology, computer technology];
Discipline Code
0812;
Abstract
As the airspace experiences a growing number of low-altitude aircraft, spectrum sharing between aerial and terrestrial users emerges as a compelling way to improve spectrum utilization efficiency. In this paper, we consider a new Aerial and Terrestrial Hybrid Network (ATHN) comprising aerial vehicles (AVs), ground base stations (BSs), and terrestrial users (TUs). In this ATHN, the AVs and BSs collaboratively form a multi-hop ad-hoc network with the objective of minimizing the average end-to-end (E2E) packet transmission delay, while the BSs and TUs form a terrestrial network aimed at maximizing the uplink and downlink sum capacity. Given the concept of spectrum sharing between aerial and terrestrial users in the ATHN, we formulate a joint routing and capacity optimization (JRCO) problem, a multi-stage combinatorial problem subject to the curse of dimensionality. To address this problem, we propose a Deep Reinforcement Learning (DRL)-based algorithm. Specifically, a Dueling Double Deep Q-Network (D3QN) is constructed to learn an optimal policy through trial and error. Extensive simulation results demonstrate the efficacy of the proposed solution.
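The D3QN named in the abstract combines Double DQN's decoupled action selection with a dueling network head that splits the Q-function into a state-value stream V(s) and an advantage stream A(s, a). As a minimal sketch (not the paper's implementation, which learns these streams with a deep network over the JRCO state space), the dueling aggregation Q(s, a) = V(s) + A(s, a) - mean_a A(s, a) can be written as:

```python
import numpy as np

def dueling_q(value, advantage):
    """Combine value and advantage streams into Q-values.

    value:     shape (batch, 1)        -- state-value stream V(s)
    advantage: shape (batch, actions)  -- advantage stream A(s, a)

    Subtracting the mean advantage makes the decomposition
    identifiable: the advantages are forced to be zero-mean, so V
    absorbs the overall level of Q for each state.
    """
    return value + advantage - advantage.mean(axis=1, keepdims=True)

# Stand-in stream outputs for a batch of 2 states and 3 actions
# (in practice these come from two heads of the same network).
value = np.array([[1.0], [2.0]])
advantage = np.array([[0.0, 3.0, 0.0],
                      [1.0, 1.0, 1.0]])
q = dueling_q(value, advantage)
print(q)  # [[0. 3. 0.], [2. 2. 2.]]
```

The greedy action is then argmax over the combined Q-values; in the Double DQN part of D3QN, this argmax is taken with the online network while the target network supplies the evaluated value.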
Pages: 132056-132069
Page count: 14
Cited References
36 items total (entries [21]-[30] shown)
[21]   Energy-Efficient Resource Allocation for UAV-Assisted Vehicular Networks With Spectrum Sharing [J].
Qi, Weijing ;
Song, Qingyang ;
Guo, Lei ;
Jamalipour, Abbas .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (07) :7691-7702
[22]   Networking Models in Flying Ad-Hoc Networks (FANETs): Concepts and Challenges [J].
Sahingoz, Ozgur Koray .
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2014, 74 (1-2) :513-527
[23]  
Schaul T., 2016, arXiv preprint, arXiv:1511.05952
[24]  
Sharma Vishal, 2017, Concurrency and Computation: Practice and Experience, V29, DOI 10.1002/cpe.3931
[25]   Improvement and Performance Evaluation of GPSR-Based Routing Techniques for Vehicular Ad Hoc Networks [J].
Silva, Andrey ;
Reza, Niaz ;
Oliveira, Aurenice .
IEEE ACCESS, 2019, 7 :21722-21733
[26]  
Vey Q, 2014, LECT NOTES COMPUT SC, V8435, P81, DOI 10.1007/978-3-319-06644-8_8
[27]   Federated Deep Reinforcement Learning for Internet of Things With Decentralized Cooperative Edge Caching [J].
Wang, Xiaofei ;
Wang, Chenyang ;
Li, Xiuhua ;
Leung, Victor C. M. ;
Taleb, Tarik .
IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (10) :9441-9455
[28]  
Wang Z., 2022, P IEEE AIAA 41 DIG A, P1
[29]   Joint Spectrum Access and Power Control in Air-Air Communications - A Deep Reinforcement Learning Based Approach [J].
Wang, Zhe ;
Li, Hongxiang ;
Knoblock, Eric J. ;
Apaza, Rafael D. .
2021 IEEE/AIAA 40TH DIGITAL AVIONICS SYSTEMS CONFERENCE (DASC), 2021,
[30]  
Wang ZY, 2016, PR MACH LEARN RES, V48