Energy-Efficient Ultra-Dense Network With Deep Reinforcement Learning

Cited by: 32
Authors
Ju, Hyungyu [1,2]
Kim, Seungnyun [1,2]
Kim, Youngjoon [3]
Shim, Byonghyo [1,2]
Affiliations
[1] Seoul Natl Univ, Inst New Media & Commun, Seoul 08826, South Korea
[2] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul 08826, South Korea
[3] Samsung Res, Adv Commun Res Ctr, Seoul 06765, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Energy consumption; Power demand; Fading channels; Ultra-dense networks; Throughput; Reinforcement learning; Downlink; Wireless resource management; deep reinforcement learning (DRL); ultra-dense networks (UDNs); energy efficiency;
DOI
10.1109/TWC.2022.3150425
CLC classification number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline classification number
0808; 0809;
Abstract
With the explosive growth in mobile data traffic, the ultra-dense network (UDN), in which a large number of small cells are densely deployed on top of macro cells, has received a great deal of attention in recent years. While UDN offers a number of benefits, the upsurge of energy consumption caused by the intensive deployment of small cells has become a major bottleneck in achieving one of the primary goals of 5G+ and 6G, viz., a 100-fold increase in throughput. In recent years, an approach to reduce the energy consumption of base stations (BSs) by selectively turning off lightly loaded BSs, referred to as the sleep mode technique, has been suggested. However, determining the appropriate active/sleep modes of BSs is a difficult task owing to the huge computational overhead and the inefficiency caused by frequent BS mode conversion. The aim of this paper is to propose a deep reinforcement learning (DRL)-based approach that reduces the energy consumption of UDN. A key ingredient of the proposed scheme is the use of a decision selection network to reduce the size of the action space. Numerical results show that the proposed scheme significantly reduces the energy consumption of UDN while satisfying the rate requirement of the network.
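To make the idea in the abstract concrete, the following Python sketch illustrates one way a DRL-style agent could choose active/sleep modes for a set of small-cell BSs over a reduced action space. It is a minimal illustration under stated assumptions, not the authors' implementation: the toy environment, the reward shaping (energy cost plus a rate-violation penalty), the coarse load-bucket state, and the load-ranking heuristic standing in for the trained decision selection network are all assumptions made for this sketch, and tabular Q-learning replaces the paper's deep network.

# Illustrative sketch only (assumed names, constants, and reward shaping; not the
# authors' method): tabular Q-learning over a reduced set of candidate on/off
# patterns for N small-cell BSs, mimicking the role of the decision selection
# network that shrinks the 2**N action space.
import numpy as np

rng = np.random.default_rng(0)

N_BS = 8                          # number of small-cell BSs (assumed)
P_ACTIVE, P_SLEEP = 10.0, 1.0     # per-BS power in active/sleep mode (assumed units)
RATE_REQ = 4.0                    # aggregate rate requirement (assumed)
N_CAND = 4                        # size of the reduced candidate action set

def step(loads, action):
    """Toy environment: rate grows with active BSs serving load, energy with active BSs."""
    served = float(np.sum(loads * action))                      # crude rate proxy
    energy = float(np.sum(action * P_ACTIVE + (1 - action) * P_SLEEP))
    reward = -energy - 50.0 * max(0.0, RATE_REQ - served)       # penalize rate violation
    next_loads = np.clip(loads + 0.1 * rng.standard_normal(N_BS), 0.0, 1.0)
    return next_loads, reward

def candidate_actions(loads):
    """Action-space reduction (assumed stand-in for the decision selection network):
    rank BSs by load and only consider 'keep the top-m loaded BSs active' patterns,
    so the agent evaluates N_CAND candidates instead of all 2**N_BS combinations."""
    order = np.argsort(-loads)
    cands = []
    for m in range(1, N_CAND + 1):
        a = np.zeros(N_BS, dtype=int)
        a[order[: m * N_BS // N_CAND]] = 1
        cands.append(a)
    return cands

def bucket(loads):
    """Coarse state: discretize the average cell load into low/medium/high."""
    avg = loads.mean()
    return 0 if avg < 0.33 else (1 if avg < 0.66 else 2)

# Tabular Q-values over (load bucket, candidate index), learned epsilon-greedily.
Q = np.zeros((3, N_CAND))
alpha, gamma, eps = 0.1, 0.9, 0.1
loads = rng.uniform(0.0, 1.0, N_BS)

for t in range(5000):
    s = bucket(loads)
    cands = candidate_actions(loads)
    a = int(rng.integers(N_CAND)) if rng.random() < eps else int(np.argmax(Q[s]))
    loads, r = step(loads, cands[a])
    s_next = bucket(loads)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print("Q-table over (load bucket, candidate on/off pattern):")
print(np.round(Q, 1))

In this sketch the agent learns, per load bucket, how many of the most heavily loaded BSs to keep active; the candidate-generation step plays the role the abstract attributes to the decision selection network, namely keeping the searched action space small while the reward balances energy saving against the rate requirement.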
Pages: 6539-6552
Page count: 14