Using Reinforcement Learning to Reduce Energy Consumption of Ultra-Dense Networks With 5G Use Cases Requirements

Cited by: 19
Authors:
Malta, Silvestre [1 ,4 ]
Pinto, Pedro [1 ,2 ,3 ]
Fernandez-Veiga, Manuel
Affiliations:
[1] Inst Politecn Viana do Castelo, Appl Digital Transformat Lab (ADiT-LAB), P-4900347 Viana Do Castelo, Portugal
[2] Inst Syst & Comp Engn Technol & Sci INESC TEC, P-4200465 Porto, Portugal
[3] Univ Maia, Dept Ciencias Comunicacao & Tecnol Informacao, P-4475690 Maia, Portugal
[4] Univ Vigo, AtlanTTic Res Ctr, Vigo 36310, Spain
Keywords:
5G mobile communication; Quality of service; Energy consumption; Energy efficiency; Power demand; Telecommunication traffic; Delays; Reinforcement learning; 5G; sleep mode
DOI: 10.1109/ACCESS.2023.3236980
CLC Classification: TP [Automation Technology; Computer Technology]
Subject Classification: 0812
Abstract:
In mobile networks, 5G Ultra-Dense Networks (UDNs) have emerged because they effectively increase network capacity through cell splitting and densification. A Base Station (BS) is a fixed transceiver that serves as the main communication point for one or more wireless mobile client devices. As UDNs are densely deployed, the number of BSs and communication links is large, raising resource management concerns regarding energy efficiency, since BSs account for a large share of the total energy cost of a cellular network. Next-generation 6G mobile networks are expected to include technologies such as artificial intelligence as a service and to focus on energy efficiency. Using machine learning, it is possible to optimize energy consumption through cognitive management of the dormant, inactive, and active states of network elements. Reinforcement learning enables policies in which sleep mode techniques gradually deactivate or activate BS components and thereby decrease BS energy consumption. In this work, a sleep mode management scheme based on State-Action-Reward-State-Action (SARSA) is proposed, which uses specific metrics to find the best tradeoff between energy reduction and Quality of Service (QoS) constraints. Simulation results show that, depending on the target 5G use case, in low traffic load scenarios and when reducing energy consumption is preferred over QoS, it is possible to achieve energy savings of up to 80% with 50 ms latency, 75% with 20 ms and 10 ms latencies, and 20% with 1 ms latency. If QoS is preferred, the energy savings reach a maximum of 5% with minimal impact on latency.
Pages: 5417-5428 (12 pages)
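As an illustration of the update rule the abstract names, the sketch below shows a minimal tabular SARSA loop for BS sleep-mode control. It is not the paper's actual model: the traffic-load states, the three sleep depths, the reward weights (w_energy, w_qos), and the random environment step are all hypothetical stand-ins chosen only to make the on-policy update concrete.

```python
import random
from collections import defaultdict

# Hypothetical discretization: traffic-load levels and BS sleep depths.
STATES = ["low", "medium", "high"]      # observed traffic load (assumed)
ACTIONS = [0, 1, 2]                     # 0 = active, 1 = light sleep, 2 = deep sleep (assumed)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

Q = defaultdict(float)                  # Q[(state, action)] -> value estimate, defaults to 0.0

def reward(state, action, w_energy=0.7, w_qos=0.3):
    """Toy reward (assumption, not the paper's metric): deeper sleep saves
    more energy but incurs a latency penalty that grows with traffic load.
    The weights set the energy/QoS tradeoff the abstract describes."""
    energy_saving = action / 2.0                 # 0.0 .. 1.0
    load = STATES.index(state) / 2.0             # 0.0 .. 1.0
    latency_penalty = energy_saving * load       # worst when deep sleep meets high load
    return w_energy * energy_saving - w_qos * latency_penalty

def choose_action(state):
    """Epsilon-greedy policy over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state):
    """Placeholder environment: traffic load evolves at random here;
    a real simulator would drive it from a traffic trace."""
    return random.choice(STATES)

state = random.choice(STATES)
action = choose_action(state)
for _ in range(10_000):
    r = reward(state, action)
    next_state = step(state)
    next_action = choose_action(next_state)      # on-policy: use the action actually taken next
    td_target = r + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])
    state, action = next_state, next_action

for s in STATES:                                 # inspect the learned policy
    best = max(ACTIONS, key=lambda a: Q[(s, a)])
    print(f"load={s}: best sleep depth={best}")
```

The defining feature of SARSA, visible in the TD target above, is that it bootstraps from the action the policy actually selects in the next state rather than the greedy maximum, which is what distinguishes it from Q-learning and keeps the learned policy consistent with the exploration behavior.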