Accelerating Deep Reinforcement Learning With the Aid of Partial Model: Energy-Efficient Predictive Video Streaming

Cited by: 10
Authors
Liu, Dong [1 ]
Zhao, Jianyu [2 ]
Yang, Chenyang [2 ]
Hanzo, Lajos [1 ]
Affiliations
[1] Univ Southampton, Sch Elect & Comp Sci, Southampton SO17 1BJ, Hants, England
[2] Beihang Univ, Sch Elect & Informat Engn, Beijing 100191, Peoples R China
Funding
UK Engineering and Physical Sciences Research Council; National Natural Science Foundation of China; European Research Council;
Keywords
Streaming media; Wireless communication; Safety; Resource management; Predictive models; Heuristic algorithms; Servers; Deep reinforcement learning; convergence speed; constraint; energy efficiency; video streaming; NETWORKING;
DOI
10.1109/TWC.2021.3053319
Chinese Library Classification
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline codes
0808 ; 0809 ;
Abstract
Predictive power allocation is conceived for energy-efficient video streaming over mobile networks using deep reinforcement learning. The goal is to minimize the accumulated energy consumption of each base station over a complete video streaming session under the constraint of avoiding video playback interruptions. To handle the continuous state and action spaces, we resort to the deep deterministic policy gradient (DDPG) algorithm for solving the formulated problem. In contrast to previous predictive power allocation policies that first predict future information from historical data and then optimize the power allocation based on the predicted information, the proposed policy operates in an online, end-to-end manner. By judiciously designing the action and state to depend only on slowly-varying average channel gains, we reduce the signaling overhead between the edge server and the base stations and make it easier to learn a good policy. To further avoid playback interruptions throughout the learning process and to improve the convergence speed, we exploit the partially known model of the system dynamics by integrating the concepts of safety layer, post-decision state, and virtual experiences into the basic DDPG algorithm. Our simulation results show that the proposed policies converge to the optimal policy derived under perfect large-scale channel prediction, and outperform the first-predict-then-optimize policy in the presence of prediction errors. By harnessing the partially known model, the convergence speed can be dramatically improved. The code for reproducing the results of this article is available at https://github.com/fluidy/twc2020.
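The safety-layer concept mentioned in the abstract can be illustrated by its core operation: correcting the actor's raw action so that a linearized constraint is satisfied before the action is executed. The sketch below is a minimal, generic illustration of that closed-form projection (as in the safety-layer literature), not the paper's exact formulation; the function name, the single linear constraint c + g·a ≤ 0, and all numerical values are illustrative assumptions.

```python
import numpy as np

def safety_layer(action, g, c):
    """Project `action` onto the half-space {a : c + g @ a <= 0}.

    Closed-form solution of
        minimize ||a - action||^2  subject to  c + g @ a <= 0,
    i.e. the smallest correction (along g) that makes the raw
    actor output satisfy the linearized safety constraint.
    """
    violation = c + g @ action
    if violation <= 0.0:
        return action  # already safe: leave the action unchanged
    # Shift along g just enough to land on the constraint boundary.
    return action - (violation / (g @ g)) * g

# Illustrative use: raw action [3, 2] violates -1 + a[0] <= 0,
# so it is projected back to the boundary.
raw = np.array([3.0, 2.0])
safe = safety_layer(raw, np.array([1.0, 0.0]), -1.0)
print(safe)  # [1. 2.]
```

In the paper's setting this kind of correction is what keeps the playback-interruption constraint satisfied even while the policy is still learning, since the projected action is always feasible with respect to the (partially) known model.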
Pages: 3734-3748
Number of pages: 15
Related papers
28 entries in total
[11] Guo, Jia; Yang, Chenyang; I, Chih-Lin. "Exploiting Future Radio Resources With End-to-End Prediction by Deep Learning." IEEE ACCESS, 2018, 6: 75729-75747.
[12] Hu, Y. C. ETSI White Paper, 2015, 11: 1.
[13] Kasparick, Martin; Cavalcante, Renato L. G.; Valentin, Stefan; Stanczak, Slawomir; Yukawa, Masahiro. "Kernel-Based Adaptive Online Reconstruction of Coverage Maps With Side Information." IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2016, 65 (07): 5461-5473.
[14] LeCun, Y. NATURE, 2015, 521: 436. DOI: 10.1038/NATURE14539.
[15] Lillicrap, T. P.; et al. "Continuous control with deep reinforcement learning." 2015.
[16] Liu, D. H. Proc. Int. Conf. on Electrical Machines and Systems (ICEMS), 2019: 1713. DOI: 10.1109/icems.2019.8921519.
[17] Liu, Dong; Yang, Chenyang. "A Deep Reinforcement Learning Approach to Proactive Content Pushing and Recommendation for Mobile Users." IEEE ACCESS, 2019, 7: 83120-83136.
[18] Luong, Nguyen Cong; Hoang, Dinh Thai; Gong, Shimin; Niyato, Dusit; Wang, Ping; Liang, Ying-Chang; Kim, Dong In. "Applications of Deep Reinforcement Learning in Communications and Networking: A Survey." IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2019, 21 (04): 3133-3174.
[19] Mastronarde, Nicholas; van der Schaar, Mihaela. "Fast Reinforcement Learning for Energy-Efficient Wireless Communication." IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2011, 59 (12): 6262-6266.
[20] Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis. "Human-level control through deep reinforcement learning." NATURE, 2015, 518 (7540): 529-533.