A novel energy-efficiency framework for UAV-assisted networks using adaptive deep reinforcement learning

Cited by: 2
Authors
Seerangan, Koteeswaran [1 ]
Nandagopal, Malarvizhi [2 ]
Govindaraju, Tamilmani [3 ]
Manogaran, Nalini [4 ]
Balusamy, Balamurugan [5 ]
Selvarajan, Shitharth [6 ,7 ]
Affiliations
[1] SA Engn Coll Autonomous, Dept CSE AI&ML, Chennai 600077, Tamil Nadu, India
[2] Vel Tech Rangarajan Dr Sagunthala R&D Inst Sci & T, Sch Comp, Dept CSE, Chennai 600062, Tamil Nadu, India
[3] SRM Inst Sci & Technol, Dept Computat Intelligence, Chennai 603203, Tamil Nadu, India
[4] SA Engn Coll Autonomous, Dept CSE, Chennai 600077, Tamil Nadu, India
[5] Shiv Nadar Inst Eminence Univ, Greater Noida 201314, Uttar Pradesh, India
[6] Kebri Dehar Univ, Dept Comp Sci, Kebri Dehar 250, Ethiopia
[7] Leeds Beckett Univ, Sch Built Environm Engn & Comp, Leeds LS6 3QS, England
Source
SCIENTIFIC REPORTS, 2024, Vol. 14, Iss. 1
Keywords
Unmanned aerial vehicles; Energy efficiency; Deep reinforcement learning; Novel loss function; Hybrid energy valley and hermit crab; DESIGN; IOT;
DOI
10.1038/s41598-024-71621-x
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Code
07; 0710; 09;
Abstract
In air-to-ground transmissions, the lifespan of the network depends on the lifespan of the unmanned aerial vehicle (UAV) because of its limited battery capacity. Thus, enhancing energy efficiency and minimizing the outage of ground candidates are significant factors in network functionality. UAV-aided transmission can greatly enhance spectrum efficiency and coverage. Owing to their flexible deployment and high maneuverability, UAVs can be the best alternative in situations where Internet of Things (IoT) systems consume excessive energy to attain the required information rate because they are far from the terrestrial base station. It is therefore important to overcome the shortcomings of conventional UAV-aided energy-efficiency approaches. Accordingly, this work aims to design an innovative energy-efficiency framework for UAV-assisted networks using a reinforcement learning mechanism. Optimizing the energy efficiency of the UAV provides better wireless coverage to static and mobile ground users. Existing reinforcement learning techniques optimize the system's energy-efficiency rate by employing a 2D trajectory mechanism, which effectively removes the interference arising in nearby UAV cells. The main objective of the recommended framework is to maximize the energy-efficiency rate of the UAV network through joint optimization of the UAV 3D trajectory, the energy utilized (accounting for interference), and the number of connected users. Hence, an efficient Adaptive Deep Reinforcement Learning with Novel Loss Function (ADRL-NLF) framework is designed to provide a better energy-efficiency rate for the UAV network. Moreover, the parameters of the ADRL are tuned using the Hybrid Energy Valley and Hermit Crab (HEVHC) algorithm. Various experiments are conducted to assess the effectiveness of the recommended energy-efficiency model for UAV-based networks against classical energy-efficiency frameworks for UAV networks.
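The abstract frames energy efficiency as a joint function of the UAV's 3D trajectory, the energy consumed (including interference-related energy), and the number of connected users. The sketch below is a minimal, hypothetical illustration of how such a per-step reward could be computed for a DRL agent; the function name, signature, and weighting are assumptions made for illustration and are not the paper's ADRL-NLF formulation.

```python
# Hypothetical sketch (not the paper's exact ADRL-NLF reward): a per-step
# reward coupling throughput, connected-user count, and the energy spent on
# propulsion and interference handling, mirroring the joint objective
# described in the abstract.

def energy_efficiency_reward(throughput_bps: float,
                             n_connected_users: int,
                             propulsion_energy_j: float,
                             interference_energy_j: float,
                             eps: float = 1e-9) -> float:
    """User-weighted bits delivered per joule consumed in one decision step."""
    total_energy_j = propulsion_energy_j + interference_energy_j
    bits_per_joule = throughput_bps / (total_energy_j + eps)
    return n_connected_users * bits_per_joule


# Example: 12 users served at an aggregate 3.6 Mbit/s while the UAV spends
# 150 J on propulsion and 20 J attributable to interference mitigation.
print(energy_efficiency_reward(3.6e6, 12, 150.0, 20.0))
```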
Pages: 23
Related Papers
50 records in total
  • [1] Communication-enabled deep reinforcement learning to optimise energy-efficiency in UAV-assisted networks
    Omoniwa, Babatunji
    Galkin, Boris
    Dusparic, Ivana
    VEHICULAR COMMUNICATIONS, 2023, 43
  • [2] Optimizing Energy Efficiency in UAV-Assisted Networks Using Deep Reinforcement Learning
    Omoniwa, Babatunji
    Galkin, Boris
    Dusparic, Ivana
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2022, 11 (08) : 1590 - 1594
  • [3] Energy Consumption Modeling and Optimization of UAV-Assisted MEC Networks Using Deep Reinforcement Learning
    Yan, Ming
    Zhang, Litong
    Jiang, Wei
    Chan, Chien Aun
    Gygax, Andre F.
    Nirmalathas, Ampalavanapillai
    IEEE SENSORS JOURNAL, 2024, 24 (08) : 13629 - 13639
  • [4] Density-Aware Reinforcement Learning to Optimise Energy Efficiency in UAV-Assisted Networks
    Omoniwa, Babatunji
    Galkin, Boris
    Dusparic, Ivana
    2023 19TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS, WIMOB, 2023, : 267 - 273
  • [5] UAV-Assisted Wireless Energy and Data Transfer With Deep Reinforcement Learning
    Xiong, Zehui
    Zhang, Yang
    Lim, Wei Yang Bryan
    Kang, Jiawen
    Niyato, Dusit
    Leung, Cyril
    Miao, Chunyan
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2021, 7 (01) : 85 - 99
  • [6] Caching Placement Optimization in UAV-Assisted Cellular Networks: A Deep Reinforcement Learning-Based Framework
    Wang, Yun
    Fu, Shu
    Yao, Changhua
    Zhang, Haijun
    Yu, Fei Richard
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2023, 12 (08) : 1359 - 1363
  • [7] Deep Reinforcement Learning for UAV-Assisted Emergency Response
    Lee, Isabella
    Babu, Vignesh
    Caesar, Matthew
    Nicol, David
    PROCEEDINGS OF THE 17TH EAI INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS SYSTEMS: COMPUTING, NETWORKING AND SERVICES (MOBIQUITOUS 2020), 2021, : 327 - 336
  • [8] Deep Reinforcement Learning for Fresh Data Collection in UAV-assisted IoT Networks
    Yi, Mengjie
    Wang, Xijun
    Liu, Juan
    Zhang, Yan
    Bai, Bo
    IEEE INFOCOM 2020 - IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (INFOCOM WKSHPS), 2020, : 716 - 721
  • [9] Deep Reinforcement Learning for Minimizing Age-of-Information in UAV-assisted Networks
    Abd-Elmagid, Mohamed A.
    Ferdowsi, Aidin
    Dhillon, Harpreet S.
    Saad, Walid
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [10] Deep Reinforcement Learning Based Resource Management in UAV-Assisted IoT Networks
    Munaye, Yirga Yayeh
    Juang, Rong-Terng
    Lin, Hsin-Piao
    Tarekegn, Getaneh Berie
    Lin, Ding-Bing
    APPLIED SCIENCES-BASEL, 2021, 11 (05): : 1 - 20