Double Deep Q-Network Method for Energy Efficiency and Throughput in a UAV-Assisted Terrestrial Network

Cited by: 6
Authors
Ouamri M.A. [1,2]
Alkanhel R. [3]
Singh D. [4]
El-Kenaway E.-S.M. [5]
Ghoneim S.S.M. [6]
Affiliations
[1] University Grenoble Alpes, CNRS, Grenoble INP, LIG, DRAKKAR Teams, Grenoble
[2] Laboratoire d'informatique Médical, Université de Bejaia, Targa Ouzemour
[3] Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O.Box 84428, Riyadh
[4] Department of Research and Development, Centre for Space Research, School of Electronics and Electrical Engineering, Lovely Professional University, Phagwara
[5] Department of Communication and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura
[6] Electrical Engineering Department, College of Engineering, Taif University, P. O. BOX 11099, Taif
Keywords
mmWave; reinforcement learning; resource allocation; terrestrial network; UAV
DOI
10.32604/csse.2023.034461
Abstract
Increasing the coverage and capacity of cellular networks by deploying additional base stations is one of the fundamental objectives of fifth-generation (5G) networks. However, the resulting densification of connected devices and simultaneous access demands leads to performance degradation and heavy spectral consumption. To meet these access conditions and improve Quality of Service, resource allocation (RA) must be carefully optimized. RA problems are traditionally nonconvex optimizations solved with heuristic methods such as genetic algorithms, particle swarm optimization, and simulated annealing, but these approaches remain computationally expensive and unattractive for dense cellular networks. Artificial intelligence algorithms are therefore used to improve traditional RA mechanisms, and deep learning in particular is a promising tool for resource management in wireless communication. In this study, we investigate a double deep Q-network (DDQN)-based RA framework that maximizes energy efficiency (EE) and total network throughput in unmanned aerial vehicle (UAV)-assisted terrestrial networks. The system is studied under interference constraints, and the optimization problem is formulated as a mixed-integer nonlinear program. Within this framework, we evaluate the effect of UAV height and the number of UAVs on EE and throughput, and we compare the proposed algorithm with several artificial intelligence methods. Simulation results indicate that the proposed approach increases EE while achieving considerable throughput. © 2023 CRL Publishing. All rights reserved.
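The abstract describes the DDQN-based resource-allocation framework only at a high level. As a minimal illustrative sketch (not the authors' implementation), the following Python/PyTorch code shows the core double-DQN update such a framework typically relies on: the online network selects the greedy next action and the target network evaluates it, which mitigates the Q-value overestimation of a single deep Q-network. The state dimension, discrete action set, network size, and hyperparameters below are assumed placeholders; in the paper's setting the reward would encode the EE/throughput objective under interference constraints.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 8    # assumed state features, e.g., UAV heights, user demand, interference levels
N_ACTIONS = 16   # assumed discrete resource-allocation choices (e.g., channel/power pairs)
GAMMA = 0.99     # discount factor

class QNetwork(nn.Module):
    # Small fully connected network mapping a state to one Q-value per action.
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.layers(state)

online_net = QNetwork()                 # selects actions and is trained
target_net = QNetwork()                 # evaluates the selected actions
target_net.load_state_dict(online_net.state_dict())
optimizer = optim.Adam(online_net.parameters(), lr=1e-3)

# Replay buffer of (state, action, reward, next_state, done) transitions,
# stored as a float state tensor, a long action scalar, and float reward/done scalars.
replay = deque(maxlen=10_000)

def ddqn_update(batch_size=32):
    # One double-DQN step: the online network picks the argmax action for each
    # next state, the target network scores it, curbing Q-value overestimation.
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states, actions, rewards, next_states, dones = map(torch.stack, zip(*batch))
    q_taken = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)   # selection
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)  # evaluation
        targets = rewards + GAMMA * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(q_taken, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

A full training loop would additionally populate the replay buffer from environment interactions and periodically copy the online network's weights into target_net; both steps are omitted here for brevity.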
Pages: 73 - 92
Page count: 19
Related Papers
50 records in total
  • [31] Double deep Q-learning network-based path planning in UAV-assisted wireless powered NOMA communication networks
    Lei, Ming
    Fowler, Scott
    Wang, Juzhen
    Zhang, Xingjun
    Yu, Bocheng
    Yu, Bin
    2021 IEEE 94TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2021-FALL), 2021,
  • [32] Real-Time Optimization of Microgrid Energy Management Using Double Deep Q-Network
    Rokh, Shahram Bahman
    Zhang, Rui
    Ravishankar, Jayashri
    Saberi, Hossein
    Fletcher, John
    2023 IEEE POWER & ENERGY SOCIETY INNOVATIVE SMART GRID TECHNOLOGIES CONFERENCE, ISGT, 2023,
  • [33] Coordinated optimal dispatch of composite energy storage microgrid based on double deep Q-network
    Piao Z.
    Li T.
    Zhang B.
    Kou L.
    International Journal of Wireless and Mobile Computing, 2024, 26 (01) : 92 - 98
  • [34] Timeslot Scheduling with Reinforcement Learning Using a Double Deep Q-Network
    Ryu, Jihye
    Kwon, Juhyeok
    Ryoo, Jeong-Dong
    Cheung, Taesik
    Joung, Jinoo
    ELECTRONICS, 2023, 12 (04)
  • [35] Trajectory Design and Link Selection in UAV-Assisted Hybrid Satellite-Terrestrial Network
    Chen, Yu-Jia
    Chen, Wei
    Ku, Meng-Lin
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (07) : 1643 - 1647
  • [36] Deep Double Q-Network Based on Linear Dynamic Frame Skip
    Chen S.
    Zhang X.-F.
    Zhang Z.-Z.
    Liu Q.
    Wu J.-J.
    Yan Y.
    Jisuanji Xuebao/Chinese Journal of Computers, 2019, 42 (11) : 2561 - 2573
  • [37] Optimizing Energy Efficiency in UAV-Assisted Networks Using Deep Reinforcement Learning
    Omoniwa, Babatunji
    Galkin, Boris
    Dusparic, Ivana
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2022, 11 (08) : 1590 - 1594
  • [38] Effective defense strategies in network security using improved double dueling deep Q-network
    Zhu, Zhengwei
    Chen, Miaojie
    Zhu, Chenyang
    Zhu, Yanping
    COMPUTERS & SECURITY, 2024, 136
  • [39] Microgrid energy management using deep Q-network reinforcement learning
    Alabdullah, Mohammed H.
    Abido, Mohammad A.
    ALEXANDRIA ENGINEERING JOURNAL, 2022, 61 (11) : 9069 - 9078
  • [40] UAV Autonomous Navigation for Wireless Powered Data Collection with Onboard Deep Q-Network
    Li, Yuting
    Ding, Yi
    Gao, Jiangchuan
    Liu, Yusha
    Hu, Jie
    Yang, Kun
    ZTE COMMUNICATIONS, 2023, 21 (02) : 80 - 87