Optimal Frequency Reuse and Power Control in Multi-UAV Wireless Networks: Hierarchical Multi-Agent Reinforcement Learning Perspective

Cited by: 10
Authors
Lee, Seungmin [1 ,2 ]
Lim, Suhyeon [1 ,2 ]
Chae, Seong Ho [3 ]
Jung, Bang Chul [4 ]
Park, Chan Yi [5 ]
Lee, Howon [1 ,2 ]
Affiliations
[1] Hankyong Natl Univ, Sch Elect & Elect Engn, Anseong 17579, South Korea
[2] Hankyong Natl Univ, Inst IT Convergence IITC, Anseong 17579, South Korea
[3] Tech Univ Korea, Dept Elect Engn, Siheung Si 15073, South Korea
[4] Chungnam Natl Univ, Dept Elect Engn, Daejeon 34134, South Korea
[5] Agcy Def Dev, Daejeon 34186, South Korea
Keywords
Frequency conversion; Computer architecture; Time-frequency analysis; Microprocessors; Wireless networks; Q-learning; Autonomous aerial vehicles; Unmanned aerial vehicle; optimal frequency reuse; transmit power control; energy efficiency; hierarchical multi-agent Q-learning; multi-UAV wireless network; COVERAGE; ACCESS;
DOI
10.1109/ACCESS.2022.3166179
CLC classification: TP [Automation technology; computer technology]
Discipline code: 0812
Abstract
To overcome the problems caused by the limited battery lifetime of unmanned aerial vehicles (UAVs) in multi-UAV wireless networks, we propose a hierarchical multi-agent reinforcement learning (RL) framework that maximizes the energy efficiency (EE) of UAVs by jointly finding the optimal frequency reuse factor and transmit power. The proposed algorithm consists of a distributed inner-loop RL for transmit power control at each UAV terminal (UT) and a centralized outer-loop RL for finding the optimal frequency reuse factor. By adjusting these two factors jointly, the algorithm effectively mitigates intercell interference and reduces unnecessary transmit power consumption in multi-UAV wireless networks. As a result, it outperforms conventional algorithms such as a random-action algorithm with a fixed frequency reuse factor and a hierarchical multi-agent Q-learning algorithm with binary transmit power control. Furthermore, even when UTs move continuously according to a mixed mobility model, the proposed algorithm achieves the highest reward among the compared algorithms.
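The two-level structure described in the abstract can be illustrated with a minimal toy sketch. This is not the authors' implementation: the candidate reuse factors, power levels, hyperparameters, and the stand-in "energy efficiency" reward below are all illustrative assumptions. It only shows the shape of the scheme, a centralized outer-loop Q-learner choosing a frequency reuse factor while distributed per-UT inner-loop Q-learners choose transmit powers, with both levels updated from a shared reward.

```python
import random

REUSE_FACTORS = [1, 3, 7]          # assumed candidate frequency reuse factors
POWER_LEVELS = [0.1, 0.5, 1.0]     # assumed candidate transmit powers (W)
NUM_UTS = 4                        # illustrative number of UAV terminals
ALPHA, EPS = 0.1, 0.1              # illustrative learning rate / exploration

def toy_energy_efficiency(reuse, powers):
    """Stand-in reward: a throughput-like term divided by total transmit
    power. Larger reuse factors reduce interference but also the bandwidth
    available per cell; the constants are arbitrary."""
    interference = sum(powers) / reuse
    rate = sum(p / (0.1 + interference) for p in powers) / reuse
    return rate / (sum(powers) + 1e-9)

def eps_greedy(q, eps):
    """Pick a random action with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])

# Single-state (bandit-style) Q-tables for brevity:
# one centralized outer table, one inner table per UT.
outer_q = [0.0] * len(REUSE_FACTORS)
inner_q = [[0.0] * len(POWER_LEVELS) for _ in range(NUM_UTS)]

random.seed(0)
for episode in range(2000):
    f_idx = eps_greedy(outer_q, EPS)                  # outer loop: reuse factor
    p_idx = [eps_greedy(inner_q[u], EPS)              # inner loop: per-UT power
             for u in range(NUM_UTS)]
    powers = [POWER_LEVELS[i] for i in p_idx]
    r = toy_energy_efficiency(REUSE_FACTORS[f_idx], powers)
    # Bandit-style Q updates driven by the shared EE reward.
    outer_q[f_idx] += ALPHA * (r - outer_q[f_idx])
    for u in range(NUM_UTS):
        inner_q[u][p_idx[u]] += ALPHA * (r - inner_q[u][p_idx[u]])

best_reuse = REUSE_FACTORS[max(range(len(outer_q)), key=lambda a: outer_q[a])]
print("learned reuse factor:", best_reuse)
```

In the paper the inner loop is distributed across UTs and the state/reward design is far richer; here both levels share one scalar reward purely to keep the sketch self-contained.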
Pages: 39555-39565 (11 pages)
References (19 in total)
[11]   Analytical Evaluation of Fractional Frequency Reuse for OFDMA Cellular Networks [J].
Novlan, Thomas David ;
Ganti, Radha Krishna ;
Ghosh, Arunabha ;
Andrews, Jeffrey G. .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2011, 10 (12) :4294-4305
[12]   Hierarchical Reinforcement-Learning for Real-Time Scheduling of Agile Satellites [J].
Ren, Lili ;
Ning, Xin ;
Li, Jiayin .
IEEE ACCESS, 2020, 8 :220523-220532
[13]   Random 3D Mobile UAV Networks: Mobility Modeling and Coverage Probability [J].
Sharma, Pankaj K. ;
Kim, Dong In .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2019, 18 (05) :2527-2538
[14]   A Q-Learning Framework for User QoE Enhanced Self-Organizing Spectrally Efficient Network Using a Novel Inter-Operator Proximal Spectrum Sharing [J].
Srinivasan, Manikantan ;
Kotagi, Vijeth J. ;
Murthy, C. Siva Ram .
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2016, 34 (11) :2887-2901
[15]   Deep Reinforcement Learning for Dynamic Multichannel Access in Wireless Networks [J].
Wang, Shangxing ;
Liu, Hanpeng ;
Gomes, Pedro Henrique ;
Krishnamachari, Bhaskar .
IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2018, 4 (02) :257-265
[16]   Unmanned Aerial Vehicle Base Station (UAV-BS) Deployment With Millimeter-Wave Beamforming [J].
Xiao, Zhenyu ;
Dong, Hang ;
Bai, Lin ;
Wu, Dapeng Oliver ;
Xia, Xiang-Gen .
IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (02) :1336-1349
[17]   What is 5G? Emerging 5G Mobile Services and Network Requirements [J].
Yu, Heejung ;
Lee, Howon ;
Jeon, Hongbeom .
SUSTAINABILITY, 2017, 9 (10)
[18]  
Zhang YL, 2014, 2014 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS (ICCS), P364, DOI 10.1109/ICCS.2014.7024826
[19]   Hierarchical Deep Reinforcement Learning for Backscattering Data Collection With Multiple UAVs [J].
Zhang, Yu ;
Mou, Zhiyu ;
Gao, Feifei ;
Xing, Ling ;
Jiang, Jing ;
Han, Zhu .
IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (05) :3786-3800