Power Allocation for Millimeter-Wave Railway Systems with Multi-Agent Deep Reinforcement Learning

Cited by: 0
Authors
Xu, Jianpeng [1 ,2 ]
Ai, Bo [1 ,2 ]
Sun, Yannan [1 ,2 ]
Chen, Yali [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, State Key Lab Rail Traff Control & Safety, Beijing, Peoples R China
[2] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing, Peoples R China
Keywords
High-speed railway (HSR); millimeter-wave communications; hybrid beamforming; power allocation; multiagent deep reinforcement learning;
DOI
10.1109/GLOBECOM42002.2020.9322607
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Railways are evolving into the era of the smart railway. Unfortunately, the difficulty of obtaining accurate instantaneous channel state information in the high-speed railway (HSR) scenario makes conventional power allocation schemes hard to apply. In this paper, we propose an innovative experience-driven power allocation algorithm that learns power decisions from its own experience rather than from an accurate mathematical model, much as a person learns a new skill such as driving. Specifically, with the aim of maximizing the achievable sum rate, we first formulate a joint hybrid beamforming and power allocation problem based on the millimeter-wave HSR channel model. We then obtain the beamforming design at both the transmitters (TXs) and the receivers (RXs). Finally, we propose an experience-driven power allocation algorithm based on multi-agent deep reinforcement learning. Numerical results indicate that the proposed algorithm significantly outperforms existing state-of-the-art schemes in spectral efficiency.
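The abstract's idea of learning power decisions from experience, with one agent per link and the achievable sum rate as the shared objective, can be illustrated with a deliberately simplified sketch. This is not the paper's algorithm: it replaces deep networks with stateless (bandit-style) Q-tables, uses a toy Shannon-rate interference model in place of the millimeter-wave HSR channel, and all names and constants (`N_AGENTS`, `POWER_LEVELS`, `NOISE`, the learning hyperparameters) are illustrative assumptions.

```python
import numpy as np

# Toy multi-agent sketch (hypothetical, not the paper's method): each TX is an
# independent agent choosing a discrete power level; the shared team reward is
# the achievable sum rate under cross-link interference.
rng = np.random.default_rng(0)
N_AGENTS = 3                               # one agent per TX-RX link (assumed)
POWER_LEVELS = np.array([0.1, 0.5, 1.0])   # discrete transmit powers (assumed)
NOISE = 1e-2                               # noise power (assumed)
EPS, ALPHA = 0.1, 0.1                      # epsilon-greedy rate, learning rate

# Stateless Q-tables: one row of action-values per agent
Q = np.zeros((N_AGENTS, len(POWER_LEVELS)))

def sum_rate(powers, gains):
    """Toy achievable sum rate: log2(1 + SINR) per link, summed over links."""
    rate = 0.0
    for i in range(N_AGENTS):
        signal = gains[i, i] * powers[i]
        interference = sum(gains[j, i] * powers[j]
                           for j in range(N_AGENTS) if j != i)
        rate += np.log2(1.0 + signal / (interference + NOISE))
    return rate

for step in range(2000):
    # Fresh random fading each step (exponential channel gains, assumed model)
    gains = rng.exponential(1.0, size=(N_AGENTS, N_AGENTS))
    # Each agent picks a power level epsilon-greedily from its own Q-table
    actions = [int(rng.integers(len(POWER_LEVELS))) if rng.random() < EPS
               else int(np.argmax(Q[i])) for i in range(N_AGENTS)]
    reward = sum_rate(POWER_LEVELS[actions], gains)   # shared team reward
    for i, a in enumerate(actions):
        Q[i, a] += ALPHA * (reward - Q[i, a])         # bandit-style update

learned = POWER_LEVELS[[int(np.argmax(Q[i])) for i in range(N_AGENTS)]]
print("learned power levels:", learned)
```

The shared reward makes this a cooperative setting: every agent's Q-update moves toward the team sum rate, which is the same coupling structure the abstract's multi-agent formulation relies on, here stripped down to its simplest form.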
Pages: 6
Related Papers (50 records)
  • [31] Learning to Communicate with Deep Multi-Agent Reinforcement Learning
    Foerster, Jakob N.
    Assael, Yannis M.
    de Freitas, Nando
    Whiteson, Shimon
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [32] Deep Reinforcement Learning for Interference Management in Millimeter-Wave Networks
    Dahal, Madan
    Vaezi, Mojtaba
    2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022, : 1064 - 1069
  • [33] Assured Deep Multi-Agent Reinforcement Learning for Safe Robotic Systems
    Riley, Joshua
    Calinescu, Radu
    Paterson, Colin
    Kudenko, Daniel
    Banks, Alec
    AGENTS AND ARTIFICIAL INTELLIGENCE, ICAART 2021, 2022, 13251 : 158 - 180
  • [34] Multi-agent Deep Reinforcement Learning for Countering Uncrewed Aerial Systems
    Pierre, Jean-Elie
    Sun, Xiang
    Novick, David
    Fierro, Rafael
    DISTRIBUTED AUTONOMOUS ROBOTIC SYSTEMS, DARS 2022, 2024, 28 : 394 - 407
  • [35] MAGNet: Multi-agent Graph Network for Deep Multi-agent Reinforcement Learning
    Malysheva, Aleksandra
    Kudenko, Daniel
    Shpilman, Aleksei
    2019 XVI INTERNATIONAL SYMPOSIUM PROBLEMS OF REDUNDANCY IN INFORMATION AND CONTROL SYSTEMS (REDUNDANCY), 2019, : 171 - 176
  • [36] Dealing with Limited Backhaul Capacity in Millimeter-Wave Systems: A Deep Reinforcement Learning Approach
    Feng, Mingjie
    Mao, Shiwen
    IEEE COMMUNICATIONS MAGAZINE, 2019, 57 (03) : 50 - 55
  • [37] Multi-Agent Power and Resource Allocation for D2D Communications: A Deep Reinforcement Learning Approach
    Xiang, Honglin
    Peng, Jingyi
    Gao, Zhen
    Li, Lingjie
    Yang, Yang
    2022 IEEE 96TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-FALL), 2022,
  • [38] Action Space-independent Exploration Methods in Multi-agent Deep Reinforcement Learning for Wireless Power Allocation
    Kopic, Amna
    Perenda, Erma
    Gacanin, Haris
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [39] Multi-Agent Deep Reinforcement Learning-Empowered Channel Allocation in Vehicular Networks
    Kumar, Anitha Saravana
    Zhao, Lian
    Fernando, Xavier
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (02) : 1726 - 1736
  • [40] Deep multi-agent reinforcement learning for resource allocation in NOMA-enabled MEC
    Waqar, Noor
    Hassan, Syed Ali
    Pervaiz, Haris
    Jung, Haejoon
    Dev, Kapal
    COMPUTER COMMUNICATIONS, 2022, 196 : 1 - 8