RIS-Aided Proactive Mobile Network Downlink Interference Suppression: A Deep Reinforcement Learning Approach

Cited: 2
Authors
Wang, Yingze [1 ]
Sun, Mengying [1 ]
Cui, Qimei [1 ]
Chen, Kwang-Cheng [2 ]
Liao, Yaxin [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Engn Lab Mobile Network Technol, Beijing 100876, Peoples R China
[2] Univ S Florida, Dept Elect Engn, Tampa, FL 33620 USA
Funding
National Natural Science Foundation of China;
Keywords
proactive mobile network (PMN); reconfigurable intelligent surface (RIS); asynchronous advantage actor-critic (A3C); interference suppression; reinforcement learning (RL); AD-HOC NETWORKS; TRANSMISSION CAPACITY; INTELLIGENT; RELAY; OPTIMIZATION; ALLOCATION;
DOI
10.3390/s23146550
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline codes
070302 ; 081704 ;
Abstract
A proactive mobile network (PMN) is a novel architecture enabling extremely low-latency communication. This architecture adopts an open-loop transmission mode that eliminates all real-time control feedback and uses virtual-cell technology to allocate resources to users non-exclusively. However, such a design also introduces significant inter-user interference and degrades communication reliability. In this paper, we propose introducing multi-reconfigurable intelligent surface (RIS) technology into the downlink process of the PMN to improve the network's capacity under interference. Since the PMN environment is complex and time-varying and accurate channel state information cannot be acquired in real time, it is challenging to manage RISs so that they serve the PMN effectively. We begin by formulating an optimization problem over the RIS phase shifts and reflection coefficients. Furthermore, motivated by recent developments in deep reinforcement learning (DRL), we propose an asynchronous advantage actor-critic (A3C)-based method that solves this problem by appropriately designing the action space, state space, and reward function. Simulation results indicate that deploying RISs within a region can significantly facilitate interference suppression. The proposed A3C-based scheme achieves a higher capacity than the baseline schemes and approaches the upper bound as the number of RISs increases.
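The record itself contains no code, but the quantity the abstract optimizes, the signal received through an RIS with tunable phase shifts and reflection coefficients, can be sketched in a few lines. The following is a minimal illustrative model only (element count, Rayleigh channels, and unit reflection amplitudes are all assumptions, not details from the paper); it shows why phase-shift optimization matters by comparing phase-aligned reflection against random phases:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 32  # number of RIS elements (assumed for illustration)
# Assumed Rayleigh-fading cascaded channels: BS -> RIS (h) and RIS -> user (g)
h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
g = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

def received_power(theta, beta):
    """Received signal power |g^T diag(beta * e^{j*theta}) h|^2 for RIS
    phase shifts `theta` and reflection coefficients `beta` -- the two
    variables the paper's optimization problem is posed over."""
    phi = beta * np.exp(1j * theta)          # per-element reflection response
    return np.abs(np.sum(g * phi * h)) ** 2

# Coherent combining: choose each phase to cancel the cascaded channel phase,
# so all M reflected paths add constructively at the user.
theta_opt = -np.angle(h * g)
p_opt = received_power(theta_opt, beta=np.ones(M))

# Baseline: uniformly random phases, averaged over trials.
p_rand = np.mean([received_power(rng.uniform(0, 2 * np.pi, M), np.ones(M))
                  for _ in range(200)])

print(p_opt > p_rand)  # aligned phases beat random phases
```

In the paper's setting this closed-form alignment is unavailable, since accurate channel state information cannot be acquired in real time under the PMN's open-loop mode; that is precisely the gap the A3C agent fills by learning phase-shift and reflection-coefficient actions from observed rewards instead.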
Pages: 18
Related Papers
50 records total
  • [31] Yuan, Qijiang; Xiao, Lixia; He, Chunlin; Xiao, Pei; Jiang, Tao. Deep Learning-Based Hybrid Precoding for RIS-Aided Broadband Terahertz Communication Systems in the Face of Beam Squint. IEEE WIRELESS COMMUNICATIONS LETTERS, 2024, 13 (02): 303-307.
  • [32] Xu, Jianpeng; Ai, Bo; Wu, Lina; Zhang, Yaoyuan; Wang, Weirong; Li, Huiya. Deep Reinforcement Learning for Computation Rate Maximization in RIS-Enabled Mobile Edge Computing. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (07): 10862-10866.
  • [33] Gong, Shimin; Xie, Yutong; Xu, Jing; Niyato, Dusit; Liang, Ying-Chang. Deep Reinforcement Learning for Backscatter-Aided Data Offloading in Mobile Edge Computing. IEEE NETWORK, 2020, 34 (05): 106-113.
  • [34] Feng, Xue; Fu, Shu; Fang, Fang; Yu, Fei Richard. Optimizing Age of Information in RIS-Assisted NOMA Networks: A Deep Reinforcement Learning Approach. IEEE WIRELESS COMMUNICATIONS LETTERS, 2022, 11 (10): 2100-2104.
  • [35] Tang, Runzhi; Wang, Junxuan; Zhang, Yanyan; Jiang, Fan; Zhang, Xuewei; Du, Jianbo. Throughput Maximization in NOMA Enhanced RIS-Assisted Multi-UAV Networks: A Deep Reinforcement Learning Approach. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (01): 730-745.
  • [36] Wang, Yufei; Liu, Jun; Yin, Yanhua; Tong, Yu; Liu, Jiansheng. Space Information Network Resource Scheduling for Cloud Computing: A Deep Reinforcement Learning Approach. WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022.
  • [37] Song, Boxuan; Wang, Fei. STAR-RIS-aided secure communications for MEC with delay/energy-constrained QoS using multi-agent deep reinforcement learning. AD HOC NETWORKS, 2024, 154.
  • [38] Zhu, Yiyang; Shi, Enyu; Liu, Ziheng; Zhang, Jiayi; Ai, Bo. Multi-Agent Reinforcement Learning-Based Joint Precoding and Phase Shift Optimization for RIS-Aided Cell-Free Massive MIMO Systems. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (09): 14015-14020.
  • [39] Zhou, Hang; Wang, Xiaoyan; Umehira, Masahiro; Chen, Xianfu; Wu, Celimuge; Ji, Yusheng. Wireless Access Control in Edge-Aided Disaster Response: A Deep Reinforcement Learning-Based Approach. IEEE ACCESS, 2021, 9: 46600-46611.
  • [40] Chen, Miaojiang; Liu, Wei; Wang, Tian; Liu, Anfeng; Zeng, Zhiwen. Edge intelligence computing for mobile augmented reality with deep reinforcement learning approach. COMPUTER NETWORKS, 2021, 195.