Reinforcement Learning for Traffic-Adaptive Sleep Mode Management in 5G Networks

Cited by: 11
Authors
Masoudi, Meysam [1 ]
Khafagy, Mohammad Galal [1 ,2 ]
Soroush, Ebrahim [4 ]
Giacomelli, Daniele [3 ]
Morosi, Simone [3 ]
Cavdar, Cicek [1 ]
Affiliations
[1] KTH Royal Inst Technol, Stockholm, Sweden
[2] Amer Univ Cairo AUC, Cairo, Egypt
[3] Univ Florence, Florence, Italy
[4] Zi Tel Co, Tehran, Iran
Source
2020 IEEE 31ST ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (IEEE PIMRC) | 2020
Keywords
5G; base station sleeping; discontinuous transmission; energy efficiency; reinforcement learning;
D O I
10.1109/pimrc48278.2020.9217286
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Subject Classification Code
0812 ;
Abstract
In mobile networks, base stations (BSs) have the largest share in energy consumption. To reduce BS energy consumption, BS components with similar (de)activation times can be grouped and put to sleep during periods of inactivity. The deeper and more energy-saving a sleep mode (SM) is, the longer its (de)activation time, which incurs a proportionally longer service interruption upon wake-up. Therefore, it is challenging to decide on the best SM in a timely manner, bearing in mind the daily traffic fluctuation and the service-level constraints imposed on delay/dropping. In this study, we leverage an online reinforcement learning technique, i.e., SARSA, and propose an algorithm that decides which SM to select given the time and BS load. We use real mobile traffic obtained from a BS in Stockholm to evaluate the performance of the proposed algorithm. Simulation results show that considerable energy savings can be achieved at the cost of an acceptable delay, i.e., the wake-up time before users are served, compared to two lower/upper baselines, namely fixed (non-adaptive) SMs and the optimal non-causal solution.
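The abstract describes an online SARSA agent that selects a sleep mode from the current time and BS load. The Python sketch below illustrates the general idea under assumed definitions: a (hour-of-day, load-bucket) state, four actions (active plus three sleep depths), and a toy reward trading energy saving against wake-up delay. These are illustrative choices, not the paper's exact state, action, or reward design.

```python
import numpy as np

# Minimal SARSA sketch for sleep-mode selection (assumed formulation).
# State: (hour of day, discretized BS load); actions: 0 = active, 1-3 = sleep depths.
N_HOURS, N_LOAD_BUCKETS, N_ACTIONS = 24, 10, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1            # learning rate, discount, exploration

Q = np.zeros((N_HOURS, N_LOAD_BUCKETS, N_ACTIONS))

def epsilon_greedy(state):
    """Explore with probability EPSILON, otherwise act greedily on Q."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    hour, load = state
    return int(np.argmax(Q[hour, load]))

def reward(action, load):
    """Hypothetical reward: deeper sleep saves more energy but pays a
    delay penalty that grows with the traffic arriving during sleep."""
    energy_saving = action                 # 0 (active) .. 3 (deepest SM)
    delay_penalty = action * load          # wake-up delay scales with sleep depth
    return energy_saving - 0.5 * delay_penalty

def sarsa_episode(traffic_trace):
    """One on-policy pass over a daily trace of per-hour loads (length 24)."""
    state = (0, min(int(traffic_trace[0]), N_LOAD_BUCKETS - 1))
    action = epsilon_greedy(state)
    for hour in range(1, N_HOURS):
        load_bucket = min(int(traffic_trace[hour]), N_LOAD_BUCKETS - 1)
        r = reward(action, traffic_trace[hour])
        next_state = (hour, load_bucket)
        next_action = epsilon_greedy(next_state)
        # SARSA update: the TD target uses the action actually taken next.
        h, l = state
        nh, nl = next_state
        Q[h, l, action] += ALPHA * (r + GAMMA * Q[nh, nl, next_action] - Q[h, l, action])
        state, action = next_state, next_action

if __name__ == "__main__":
    # Toy daily load profile (0..9); a real trace, as in the paper, would replace this.
    daily_load = np.abs(np.sin(np.linspace(0, np.pi, N_HOURS))) * 9
    for _ in range(100):
        sarsa_episode(daily_load)
    print(Q[3])   # learned Q-values for the 3 a.m. states
```

The on-policy nature of SARSA matters here: because the TD target uses the action the agent actually takes next, the learned policy accounts for its own exploration when weighing deep sleep against the risk of delaying arriving users.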
Pages: 6
Related Papers
50 records in total (showing 31-40)
  • [31] Adaptive Parameterized Control for Coordinated Traffic Management Using Reinforcement Learning
    Sun, Dingshan
    Jamshidnejad, Anahita
    De Schutter, Bart
    [J]. IFAC PAPERSONLINE, 2023, 56 (02): : 5463 - 5468
  • [32] Load Analysis and Sleep Mode Optimization for Energy-Efficient 5G Small Cell Networks
    Celebi, Haluk
    Guvenc, Ismail
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS (ICC WORKSHOPS), 2017, : 1159 - 1164
  • [33] Admission Control and Virtual Network Embedding in 5G Networks: A Deep Reinforcement-Learning Approach
    Troia, Sebastian
    Vanegas, Andres Felipe Rodriguez
    Zorello, Ligia Maria Moreira
    Maier, Guido
    [J]. IEEE ACCESS, 2022, 10 : 15860 - 15875
  • [34] Reliability-aware Dynamic Service Chain Scheduling in 5G Networks based on Reinforcement Learning
    Jia, Junzhong
    Yang, Lei
    Cao, Jiannong
    [J]. IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021,
  • [35] SMART USAGE OF MULTIPLE RAT IN IOT-ORIENTED 5G NETWORKS: A REINFORCEMENT LEARNING APPROACH
    Sandoval, Ruben M.
    Canovas-Carrasco, Sebastian
    Garcia-Sanchez, Antonio-Javier
    Garcia-Haro, Joan
    [J]. 2018 ITU KALEIDOSCOPE: MACHINE LEARNING FOR A 5G FUTURE (ITU K), 2018,
  • [36] Strategic Honeypot Deployment in Ultra-Dense Beyond 5G Networks: A Reinforcement Learning Approach
    Radoglou-Grammatikis, Panagiotis
    Sarigiannidis, Panagiotis
    Diamantoulakis, Panagiotis
    Lagkas, Thomas
    Saoulidis, Theocharis
    Fountoukidis, Eleftherios
    Karagiannidis, George
    [J]. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2024, 12 (02) : 643 - 655
  • [37] Energy Optimization With Multi-Sleeping Control in 5G Heterogeneous Networks Using Reinforcement Learning
    Amine, Ali El
    Chaiban, Jean-Paul
    Hassan, Hussein Al Haj
    Dini, Paolo
    Nuaymi, Loutfi
    Achkar, Roger
    [J]. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2022, 19 (04): : 4310 - 4322
  • [38] Deep Reinforcement Learning Based Dynamic Reputation Policy in 5G Based Vehicular Communication Networks
    Gyawali, Sohan
    Qian, Yi
    Hu, Rose Qingyang
    [J]. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (06) : 6136 - 6146
  • [39] User association-based load balancing using reinforcement learning in 5G heterogeneous networks
    Ramesh, Parameswaran
    Bhuvaneswari, P. T. V.
    Dhanushree, V. S.
    Gokul, G.
    Sahana, S.
    [J]. JOURNAL OF SUPERCOMPUTING, 2025, 81 (01)
  • [40] Reinforcement Learning with Adaptive Networks
    Sasaki, Tomoki
    Yamada, Satoshi
    [J]. 2017 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION SCIENCES (ICRAS), 2017, : 1 - 5