Accelerating Deep Reinforcement Learning With the Aid of Partial Model: Energy-Efficient Predictive Video Streaming

Cited by: 9
Authors
Liu, Dong [1 ]
Zhao, Jianyu [2 ]
Yang, Chenyang [2 ]
Hanzo, Lajos [1 ]
Affiliations
[1] Univ Southampton, Sch Elect & Comp Sci, Southampton SO17 1BJ, Hants, England
[2] Beihang Univ, Sch Elect & Informat Engn, Beijing 100191, Peoples R China
Funding
UK Engineering and Physical Sciences Research Council; National Natural Science Foundation of China; European Research Council;
Keywords
Streaming media; Wireless communication; Safety; Resource management; Predictive models; Heuristic algorithms; Servers; Deep reinforcement learning; convergence speed; constraint; energy efficiency; video streaming; NETWORKING;
DOI
10.1109/TWC.2021.3053319
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Predictive power allocation is conceived for energy-efficient video streaming over mobile networks using deep reinforcement learning. The goal is to minimize the accumulated energy consumption of each base station over a complete video streaming session, subject to the constraint that video playback interruptions are avoided. To handle the continuous state and action spaces, we resort to the deep deterministic policy gradient (DDPG) algorithm for solving the formulated problem. In contrast to previous predictive power allocation policies that first predict future information from historical data and then optimize the power allocation based on the predicted information, the proposed policy operates in an on-line and end-to-end manner. By judiciously designing the action and state to depend only on slowly-varying average channel gains, we reduce the signaling overhead between the edge server and the base stations, and make it easier to learn a good policy. To further avoid playback interruptions throughout the learning process and to improve the convergence speed, we exploit the partially known model of the system dynamics by integrating the concepts of safety layer, post-decision state, and virtual experiences into the basic DDPG algorithm. Our simulation results show that the proposed policies converge to the optimal policy derived under perfect large-scale channel prediction, and outperform the first-predict-then-optimize policy in the presence of prediction errors. By harnessing the partially known model, the convergence speed can be dramatically improved. The code for reproducing the results of this article is available at https://github.com/fluidy/twc2020.
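The abstract combines standard DDPG ingredients (bootstrapped critic targets, slowly-updated target networks) with a safety layer that keeps actions feasible during learning. The following is a minimal, hedged sketch of those three ideas, with illustrative names only; it is not the authors' implementation, which is available at the GitHub link above.

```python
import numpy as np

def td_target(reward, gamma, target_q_next):
    """Critic's bootstrap target in DDPG: y = r + gamma * Q'(s', mu'(s')),
    where Q' and mu' are the target critic and target actor."""
    return reward + gamma * target_q_next

def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging: target-network parameters slowly track the online
    networks, which stabilizes the temporal-difference targets."""
    return [(1.0 - tau) * t + tau * o
            for t, o in zip(target_params, online_params)]

def safety_layer(raw_action, min_feasible):
    """Sketch of a 'safety layer': project the actor's raw action onto the
    feasible set (here, a simple elementwise lower bound, e.g. a minimum
    transmit power) so the playback-interruption constraint is respected
    even while the policy is still exploring."""
    return np.maximum(raw_action, min_feasible)

# Toy usage with scalar "parameters" and a two-dimensional action
y = td_target(reward=1.0, gamma=0.99, target_q_next=2.0)
theta = soft_update([np.array(0.0)], [np.array(1.0)], tau=0.1)
a = safety_layer(np.array([0.2, 0.8]), min_feasible=0.5)
```

The projection here is a deliberately simplified stand-in: the paper's actual safety layer and its post-decision-state and virtual-experience mechanisms depend on the system model and are beyond this sketch.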
Pages: 3734-3748
Page count: 15
References (28 in total)
  • [11] Gu, S., 2017, International Conference on Learning Representations (ICLR)
  • [12] Guo, Jia; Yang, Chenyang; I, Chih-Lin. Exploiting Future Radio Resources With End-to-End Prediction by Deep Learning. IEEE Access, 2018, 6: 75729-75747
  • [13] Hu, Y. C., 2015, ETSI White Paper, vol. 11, p. 1
  • [14] Kasparick, Martin; Cavalcante, Renato L. G.; Valentin, Stefan; Stanczak, Slawomir; Yukawa, Masahiro. Kernel-Based Adaptive Online Reconstruction of Coverage Maps With Side Information. IEEE Transactions on Vehicular Technology, 2016, 65 (07): 5461-5473
  • [15] Lillicrap, T. P., 2016, Proceedings of the International Conference on Learning Representations, p. 1
  • [16] Liu, Dong; Yang, Chenyang. A Deep Reinforcement Learning Approach to Proactive Content Pushing and Recommendation for Mobile Users. IEEE Access, 2019, 7: 83120-83136
  • [17] Liu, D. D., 2019, Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC 2019), p. 1, DOI 10.1109/ITAIC.2019.8785709
  • [18] Luong, Nguyen Cong; Hoang, Dinh Thai; Gong, Shimin; Niyato, Dusit; Wang, Ping; Liang, Ying-Chang; Kim, Dong In. Applications of Deep Reinforcement Learning in Communications and Networking: A Survey. IEEE Communications Surveys and Tutorials, 2019, 21 (04): 3133-3174
  • [19] Mastronarde, Nicholas; van der Schaar, Mihaela. Fast Reinforcement Learning for Energy-Efficient Wireless Communication. IEEE Transactions on Signal Processing, 2011, 59 (12): 6262-6266
  • [20] Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis. Human-level control through deep reinforcement learning. Nature, 2015, 518 (7540): 529-533