Using Reinforcement Learning to Reduce Energy Consumption of Ultra-Dense Networks With 5G Use Cases Requirements

Cited: 19

Authors
Malta, Silvestre [1 ,4 ]
Pinto, Pedro [1 ,2 ,3 ]
Fernandez-Veiga, Manuel
Institutions
[1] Inst Politecn Viana do Castelo, Appl Digital Transformat Lab ADiT LAB, ADiT LAB, P-4900347 Viana Do Castelo, Portugal
[2] Inst Syst & Comp Engn Technol & Sci INESC TEC, P-4200465 Porto, Portugal
[3] Univ Maia, Dept Ciencias Comunicacao & Tecnol Informacao, P-4475690 Maia, Portugal
[4] Univ Vigo, AtlanTTic Res Ctr, Vigo 36310, Spain
Keywords
5G mobile communication; Quality of service; Energy consumption; Energy efficiency; Power demand; Telecommunication traffic; Delays; Reinforcement learning; 5G; energy efficiency; sleep mode; reinforcement learning;
DOI
10.1109/ACCESS.2023.3236980
CLC Classification Number
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
In mobile networks, 5G Ultra-Dense Networks (UDNs) have emerged as they effectively increase network capacity through cell splitting and densification. A Base Station (BS) is a fixed transceiver that serves as the main communication point for one or more wireless mobile client devices. As UDNs are densely deployed, the number of BSs and communication links is large, raising concerns about resource management with regard to energy efficiency, since BSs account for much of the total energy cost of a cellular network. Next-generation 6G mobile networks are expected to include technologies such as artificial intelligence as a service and to focus on energy efficiency. Using machine learning, it is possible to optimize energy consumption through cognitive management of the dormant, inactive, and active states of network elements. Reinforcement learning enables policies in which sleep mode techniques gradually deactivate or activate BS components and thereby decrease BS energy consumption. In this work, a sleep mode management scheme based on State Action Reward State Action (SARSA) is proposed, which uses specific metrics to find the best tradeoff between energy reduction and Quality of Service (QoS) constraints. The simulation results show that, depending on the target 5G use case, in low traffic load scenarios and when a reduction in energy consumption is preferred over QoS, it is possible to achieve energy savings of up to 80% with 50 ms latency, 75% with 20 ms and 10 ms latencies, and 20% with 1 ms latency. If QoS is preferred, the energy savings reach a maximum of 5% with minimal impact in terms of latency.
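To illustrate the kind of on-policy learning the abstract describes, below is a minimal SARSA sketch for BS sleep-mode control. The state space (three discretized traffic-load levels), the two actions (keep components active vs. sleep), the reward shaping, and all parameter values are illustrative assumptions for this sketch, not the paper's actual model or metrics.

```python
import random

# Minimal SARSA sketch of BS sleep-mode control (illustrative only).
# States: discretized traffic load (0 = low, 1 = medium, 2 = high).
# Actions: 0 = keep BS components active, 1 = put components to sleep.
# The reward trades an energy saving against a latency (QoS) penalty;
# the weight `qos_weight` and the environment model are hypothetical.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
N_STATES, N_ACTIONS = 3, 2

def reward(state, action, qos_weight=1.0):
    energy_saving = 1.0 if action == 1 else 0.0
    # Sleeping under higher load inflates latency: penalty grows with load.
    latency_penalty = state * 0.6 if action == 1 else 0.0
    return energy_saving - qos_weight * latency_penalty

def epsilon_greedy(q, state, rng):
    if rng.random() < EPSILON:
        return rng.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[state][a])

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    state = rng.randrange(N_STATES)
    action = epsilon_greedy(q, state, rng)
    for _ in range(episodes):
        r = reward(state, action)
        next_state = rng.randrange(N_STATES)   # traffic load fluctuates
        next_action = epsilon_greedy(q, next_state, rng)
        # On-policy SARSA update: bootstraps from the action actually taken.
        q[state][action] += ALPHA * (
            r + GAMMA * q[next_state][next_action] - q[state][action]
        )
        state, action = next_state, next_action
    return q

q = train()
policy = [max(range(N_ACTIONS), key=lambda a: q[s][a]) for s in range(N_STATES)]
# Under this reward shaping, the agent should learn to sleep at low and
# medium load and stay active at high load.
print(policy)
```

The key SARSA property shown here is that the update uses the Q-value of the action the policy actually takes next (on-policy), rather than the maximum over actions as Q-learning would.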
Pages: 5417-5428 (12 pages)
Related Papers
50 records total
  • [31] Reinforcement Learning Based Cooperative Coded Caching Under Dynamic Popularities in Ultra-Dense Networks
    Gao, Shen
    Dong, Peihao
    Pan, Zhiwen
    Li, Geoffrey Ye
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (05) : 5442 - 5456
  • [32] Landscape-enabled algorithmic design for the cell switch-off problem in 5G ultra-dense networks
    Galeano-Brajones, Jesus
    Luna, Francisco
    Carmona-Murillo, Javier
    Nebro, Antonio J.
    Coello, Carlos A. Coello
    Valenzuela-Valdes, Juan F.
    ENGINEERING OPTIMIZATION, 2025, 57 (01) : 309 - 331
  • [33] Flexible Reinforcement Learning Scheduler for 5G Networks
    Paz-Perez, Aurora
    Tato, Anxo
    Escudero-Garzas, J. Joaquin
    Gomez-Cuba, Felipe
    2024 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING FOR COMMUNICATION AND NETWORKING, ICMLCN 2024, 2024, : 566 - 572
  • [34] 5G Handover using Reinforcement Learning
    Yajnanarayana, Vijaya
    Ryden, Henrik
    Hevizi, Laszlo
    2020 IEEE 3RD 5G WORLD FORUM (5GWF), 2020, : 349 - 354
  • [35] Adaptive Power Control using Reinforcement Learning in 5G Mobile Networks
    Park, Hyebin
    Lim, Yujin
    2020 34TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING (ICOIN 2020), 2020, : 409 - 414
  • [36] Security and 5G: Attack mitigation using Reinforcement Learning in SDN networks
    Alvaro Fernandez-Carrasco, Jose
    Segurola-Gil, Lander
    Zola, Francesco
    Orduna-Urrutia, Raul
    2022 IEEE FUTURE NETWORKS WORLD FORUM, FNWF, 2022, : 622 - 627
  • [37] Hierarchical Energy Optimization With More Realistic Power Consumption and Interference Models for Ultra-Dense Networks
    Zhuang, Hongcheng
    Chen, Jun
    Gilimyanov, Ruslan
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2020, 19 (07) : 4507 - 4518
  • [38] Reinforcement Learning Power Control Algorithm Based on Graph Signal Processing for Ultra-Dense Mobile Networks
    Li, Yujie
    Tang, Zhoujin
    Lin, Zhijian
    Gong, Yanfei
    Du, Xiaojiang
    Guizani, Mohsen
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2021, 8 (03) : 2694 - 2705
  • [39] Approaching the cell switch-off problem in 5G ultra-dense networks with dynamic multi-objective optimization
    Luna, Francisco
    Zapata-Cano, Pablo H.
    Gonzalez-Macias, Juan C.
    Valenzuela-Valdes, Juan F.
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2020, 110 : 876 - 891
  • [40] Reinforcement learning for Admission Control in 5G Wireless Networks
    Raaijmakers, Youri
    Mandelli, Silvio
    Doll, Mark
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,