On Optimal Power Control for URLLC over a Non-stationary Wireless Channel using Contextual Reinforcement Learning

Cited by: 0
Authors
Sharma, Mohit K.
Sun, Sumei
Kurniawan, Ernest
Tan, Peng Hui
Source
IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022
Keywords
Energy minimization; non-stationary wireless channel; reinforcement learning; URLLC; low-latency communications; communication; optimization; networks; systems
DOI
10.1109/ICC45855.2022.9839177
CLC classification: TN [Electronics and communication technology]
Subject classification: 0809
Abstract
In this work, we investigate the design of energy-optimal policies for ultra-reliable low-latency communications (URLLC) over a non-stationary wireless channel, using a contextual reinforcement learning (RL) framework. We consider a point-to-point communication system over a piece-wise stationary wireless channel in which the Doppler frequency switches between two distinct values, depending on the underlying state of the channel. To benchmark the performance, we first consider an oracle agent that has perfect but causal information about the switching instants and consists of two deep RL (DRL) agents, each tasked with optimal decision-making in one of the two stationary regimes. Comparing the performance of the oracle agent with a conventional DRL agent reveals that the gain obtained by the oracle agent depends on the dynamics of the non-stationary channel. In particular, for a non-stationary channel with a faster switching rate, the oracle agent achieves approximately 15-20% lower energy consumption, whereas for a channel with a slower switching rate its performance is similar to that of the conventional DRL agent. Next, for the more realistic scenario in which the switching instants of the Doppler frequency are not available, we model the non-stationary channel as a regime-switching process modulated by a Markov process, and adapt the oracle agent by augmenting it with a state-tracking algorithm designed for the regime-switching process. Our simulation results show that the proposed algorithm outperforms the conventional DRL agent.
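The regime-switching setup described above can be illustrated with a minimal sketch: a two-state Markov chain modulates the channel regime (e.g., slow vs. fast Doppler), and an oracle controller with perfect causal knowledge of the regime dispatches each decision to the per-regime policy. All names, the switching probability, the Doppler values, and the toy power policies below are illustrative assumptions, not the paper's actual trained agents or parameters.

```python
import random

# Illustrative Doppler values (Hz) for the two channel regimes (assumed).
DOPPLER_HZ = {0: 5.0, 1: 50.0}

def step_regime(state, p_switch):
    """Advance the two-state Markov chain by one slot:
    flip regime with probability p_switch, else stay."""
    return 1 - state if random.random() < p_switch else state

def oracle_policy(state, channel_gain, policies):
    """Oracle agent: perfect (causal) regime knowledge selects the
    matching per-regime policy, which maps channel gain -> transmit power."""
    return policies[state](channel_gain)

# Two toy per-regime policies standing in for trained DRL agents:
# spend more power in the fast-fading regime (a hypothetical choice).
policies = {
    0: lambda g: 1.0 / max(g, 1e-3),   # slow-fading regime
    1: lambda g: 2.0 / max(g, 1e-3),   # fast-fading regime
}

random.seed(0)
state, total_energy = 0, 0.0
for slot in range(1000):
    state = step_regime(state, p_switch=0.05)   # faster switching -> larger oracle gain
    gain = random.expovariate(1.0)              # toy exponential channel power gain
    total_energy += oracle_policy(state, gain, policies)
print(f"total energy over 1000 slots: {total_energy:.1f}")
```

Without the oracle's regime knowledge, the dispatch step would be replaced by the paper's state-tracking algorithm, which estimates the current regime from observations of the regime-switching process.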
Pages: 5493-5498
Page count: 6
Related papers
50 items in total
  • [31] Geometry-Based Non-Stationary Inter-Large-Satellite Wireless Channel Model
    Chen, Xuan
    He, Yubei
    Hu, Wanru
    Yu, Guo
    Tian, Ye
    Liu, Di
    Zhang, Yufeng
    Wang, Zhenhong
    Wang, Zhugang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (09) : 10592 - 10607
  • [32] Dynamic Power Control in Wireless Body Area Networks Using Reinforcement Learning With Approximation
    Kazemi, Ramtin
    Vesilo, Rein
    Dutkiewicz, Eryk
    Liu, Ren
    2011 IEEE 22ND INTERNATIONAL SYMPOSIUM ON PERSONAL INDOOR AND MOBILE RADIO COMMUNICATIONS (PIMRC), 2011, : 2203 - 2208
  • [33] Subcarrier power control for URLLC communication system via multi-agent deep reinforcement learning in IoT network
    Wang, Haiyan
    Li, Xinmin
    Luo, Feiying
    Li, Jiahui
    Zhang, Xiaoqiang
    INTERNATIONAL JOURNAL OF COMMUNICATION NETWORKS AND DISTRIBUTED SYSTEMS, 2024, 30 (03) : 374 - 392
  • [34] Prediction-Based Multi-Agent Reinforcement Learning in Inherently Non-Stationary Environments
    Marinescu, Andrei
    Dusparic, Ivana
    Clarke, Siobhan
    ACM TRANSACTIONS ON AUTONOMOUS AND ADAPTIVE SYSTEMS, 2017, 12 (02)
  • [35] Deep Reinforcement Learning for Joint Channel Selection and Power Control in D2D Networks
    Tan, Junjie
    Liang, Ying-Chang
    Zhang, Lin
    Feng, Gang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (02) : 1363 - 1378
  • [36] Deep Reinforcement Learning for Power Controlled Channel Allocation in Wireless Avionics Intra-Communications
    Zuo, Yuanjun
    Li, Qiao
    Lu, Guangshan
    Xiong, Huagang
    IEEE ACCESS, 2021, 9 : 106964 - 106980
  • [37] Near Optimal Control Policy for Controlling Power System Stabilizers Using Reinforcement Learning
    Hadidi, Ramtin
    Jeyasurya, Benjamin
    2009 IEEE POWER & ENERGY SOCIETY GENERAL MEETING, VOLS 1-8, 2009, : 3715 - 3721
  • [38] Optimal control in microgrid using multi-agent reinforcement learning
    Li, Fu-Dong
    Wu, Min
    He, Yong
    Chen, Xin
    ISA TRANSACTIONS, 2012, 51 (06) : 743 - 751
  • [39] Optimal non-autonomous area coverage control with adaptive reinforcement learning
    Soleymani, Farzan
    Miah, Md Suruz
    Spinello, Davide
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 122
  • [40] Adaptive optimal control of stencil printing process using reinforcement learning
    Khader, Nourma
    Yoon, Sang Won
    ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2021, 71