Using Reinforcement Learning to Reduce Energy Consumption of Ultra-Dense Networks With 5G Use Cases Requirements

Cited: 19
Authors
Malta, Silvestre [1 ,4 ]
Pinto, Pedro [1 ,2 ,3 ]
Fernandez-Veiga, Manuel
Affiliations
[1] Inst Politecn Viana do Castelo, Appl Digital Transformat Lab ADiT LAB, P-4900347 Viana Do Castelo, Portugal
[2] Inst Syst & Comp Engn Technol & Sci INESC TEC, P-4200465 Porto, Portugal
[3] Univ Maia, Dept Ciencias Comunicacao & Tecnol Informacao, P-4475690 Maia, Portugal
[4] Univ Vigo, AtlanTTic Res Ctr, Vigo 36310, Spain
Keywords
5G mobile communication; Quality of service; Energy consumption; Energy efficiency; Power demand; Telecommunication traffic; Delays; Reinforcement learning; 5G; energy efficiency; sleep mode; reinforcement learning;
DOI
10.1109/ACCESS.2023.3236980
Chinese Library Classification (CLC) number
TP [Automation Technology; Computer Technology];
Subject classification code
0812;
Abstract
In mobile networks, 5G Ultra-Dense Networks (UDNs) have emerged as they effectively increase network capacity through cell splitting and densification. A Base Station (BS) is a fixed transceiver that serves as the main communication point for one or more wireless mobile client devices. As UDNs are densely deployed, the number of BSs and communication links is high, raising resource management concerns with regard to energy efficiency, since BSs account for much of the total energy cost of a cellular network. Next-generation 6G mobile networks are expected to include technologies such as artificial intelligence as a service and to focus on energy efficiency. Using machine learning, it is possible to optimize energy consumption through cognitive management of the dormant, inactive, and active states of network elements. Reinforcement learning enables policies that allow sleep mode techniques to gradually deactivate or activate BS components and thus decrease BS energy consumption. In this work, a sleep mode management scheme based on State-Action-Reward-State-Action (SARSA) is proposed, which uses specific metrics to find the best tradeoff between energy reduction and Quality of Service (QoS) constraints. Simulation results show that, depending on the target 5G use case, in low traffic load scenarios and when a reduction in energy consumption is preferred over QoS, it is possible to achieve energy savings of up to 80% with 50 ms latency, 75% with 20 ms and 10 ms latencies, and 20% with 1 ms latency. If QoS is preferred, the energy savings reach a maximum of 5% with minimal impact in terms of latency.
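As a rough illustration of the approach described in the abstract, the sketch below shows a tabular SARSA loop for BS sleep mode management. The state space (coarse traffic load level plus current sleep depth), the action set of sleep depths, the reward that trades an assumed energy saving against a latency-budget penalty, and all hyper-parameter values are illustrative assumptions for this example, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Illustrative assumptions (not the paper's exact model):
# state  -> (traffic_load_level, current_sleep_depth)
# action -> target sleep depth of the BS components (0 = fully active ... 3 = deepest sleep)
# reward -> energy saved minus a penalty when the latency budget of the
#           5G use case (e.g. 1, 10, 20 or 50 ms) would be violated.

ACTIONS = [0, 1, 2, 3]                 # candidate sleep depths
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # SARSA hyper-parameters (assumed values)

Q = defaultdict(float)                 # tabular action-value function Q[(state, action)]

def epsilon_greedy(state):
    """On-policy action selection, used both to act and to bootstrap."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(traffic_load, sleep_depth, latency_budget_ms):
    """Toy reward: deeper sleep saves more energy, but under load the
    wake-up delay it adds may break the latency budget."""
    energy_saving = 0.2 * sleep_depth                 # assumed saving per sleep depth
    wakeup_delay_ms = 5 * sleep_depth * traffic_load  # assumed wake-up delay model
    qos_penalty = 1.0 if wakeup_delay_ms > latency_budget_ms else 0.0
    return energy_saving - qos_penalty

def sarsa_episode(latency_budget_ms=10, steps=200):
    traffic = random.randint(0, 3)                    # coarse traffic-load level
    state = (traffic, 0)
    action = epsilon_greedy(state)
    for _ in range(steps):
        r = reward(traffic, action, latency_budget_ms)
        traffic = max(0, min(3, traffic + random.choice([-1, 0, 1])))
        next_state = (traffic, action)
        next_action = epsilon_greedy(next_state)      # SARSA bootstraps on the action it will actually take
        # On-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))
        Q[(state, action)] += ALPHA * (r + GAMMA * Q[(next_state, next_action)] - Q[(state, action)])
        state, action = next_state, next_action

for _ in range(500):
    sarsa_episode()
```

Because the update bootstraps on the action the exploratory policy actually selects next, the learned sleep policy stays consistent with the behavior used during training, which is the on-policy property that distinguishes SARSA from Q-learning.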
Pages: 5417-5428
Page count: 12