Reinforcement Learning for Traffic-Adaptive Sleep Mode Management in 5G Networks

Cited by: 13
Authors
Masoudi, Meysam [1 ]
Khafagy, Mohammad Galal [1 ,2 ]
Soroush, Ebrahim [4 ]
Giacomelli, Daniele [3 ]
Morosi, Simone [3 ]
Cavdar, Cicek [1 ]
Affiliations
[1] KTH Royal Inst Technol, Stockholm, Sweden
[2] Amer Univ Cairo AUC, Cairo, Egypt
[3] Univ Florence, Florence, Italy
[4] Zi Tel Co, Tehran, Iran
Source
2020 IEEE 31ST ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS (IEEE PIMRC) | 2020
Keywords
5G; base station sleeping; discontinuous transmission; energy efficiency; reinforcement learning;
DOI
10.1109/pimrc48278.2020.9217286
CLC number
TP [Automation and Computer Technology];
Subject classification code
0812 ;
Abstract
In mobile networks, base stations (BSs) account for the largest share of energy consumption. To reduce BS energy consumption, BS components with similar (de)activation times can be grouped and put to sleep during periods of inactivity. The deeper and more energy-saving a sleep mode (SM) is, the longer the (de)activation time it takes to wake up, which incurs a proportional service interruption. It is therefore challenging to decide on the best SM in a timely manner, bearing in mind the daily traffic fluctuation and the service-level constraints imposed on delay/dropping. In this study, we leverage an online reinforcement learning technique, SARSA, and propose an algorithm that decides which SM to choose given the time and the BS load. We use real mobile traffic traces obtained from a BS in Stockholm to evaluate the performance of the proposed algorithm. Simulation results show that considerable energy saving can be achieved at the cost of an acceptable delay, i.e., the wake-up time until users are served, compared to two lower/upper baselines: fixed (non-adaptive) SMs and the optimal non-causal solution.
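To make the approach concrete, the abstract's idea of learning an SM policy over (time, load) states can be sketched with tabular SARSA. Everything below is a hypothetical toy model, not the paper's actual formulation: the reward shape, the four-level SM action set, the daily traffic profile, and all hyperparameter values are illustrative assumptions.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical setup: state = hour of day, actions = sleep-mode depth
# (0 = stay active, 3 = deepest sleep). Reward trades off the energy
# saved by deeper sleep against a wake-up delay penalty that grows
# with traffic load. All constants here are illustrative.
ACTIONS = [0, 1, 2, 3]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def reward(load, sm):
    """Deeper sleep saves more energy, but waking a deeply sleeping
    BS under high load incurs a proportionally larger delay penalty."""
    energy_saving = sm * 1.0
    delay_penalty = sm * load * 2.0
    return energy_saving - delay_penalty

def load_at(hour):
    """Toy daily traffic profile: busy by day, quiet at night."""
    return 0.9 if 8 <= hour < 22 else 0.1

Q = {(h, a): 0.0 for h in range(24) for a in ACTIONS}

def choose(hour):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(hour, a)])

def train(episodes=2000):
    for _ in range(episodes):
        hour, action = 0, choose(0)
        for _ in range(24):  # one episode = one simulated day
            r = reward(load_at(hour), action)
            nxt_hour = (hour + 1) % 24
            nxt_action = choose(nxt_hour)
            # SARSA update: on-policy, bootstraps from the action
            # actually taken in the next state.
            Q[(hour, action)] += ALPHA * (
                r + GAMMA * Q[(nxt_hour, nxt_action)] - Q[(hour, action)]
            )
            hour, action = nxt_hour, nxt_action

train()
# Greedy policy after training: deep sleep at night, active by day.
policy = {h: max(ACTIONS, key=lambda a: Q[(h, a)]) for h in range(24)}
```

Because the toy reward is deterministic per (hour, action) and the next state does not depend on the chosen action, the learned greedy policy simply selects the deepest SM whenever the load penalty is small, which matches the traffic-adaptive behavior the paper targets.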
Pages: 6