RIS-Aided Proactive Mobile Network Downlink Interference Suppression: A Deep Reinforcement Learning Approach

Times Cited: 2
Authors
Wang, Yingze [1 ]
Sun, Mengying [1 ]
Cui, Qimei [1 ]
Chen, Kwang-Cheng [2 ]
Liao, Yaxin [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Engn Lab Mobile Network Technol, Beijing 100876, Peoples R China
[2] Univ S Florida, Dept Elect Engn, Tampa, FL 33620 USA
Funding
National Natural Science Foundation of China;
Keywords
proactive mobile network (PMN); reconfigurable intelligent surface (RIS); asynchronous advantage actor-critic (A3C); interference suppression; reinforcement learning (RL); AD-HOC NETWORKS; TRANSMISSION CAPACITY; INTELLIGENT; RELAY; OPTIMIZATION; ALLOCATION;
DOI
10.3390/s23146550
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
A proactive mobile network (PMN) is a novel architecture enabling extremely low-latency communication. This architecture employs an open-loop transmission mode that prohibits all real-time control feedback processes and uses virtual cell technology to allocate resources non-exclusively to users. However, such a design also creates significant potential inter-user interference and degrades communication reliability. In this paper, we propose introducing multi-reconfigurable intelligent surface (RIS) technology into the downlink process of the PMN to improve the network's capacity under interference. Since the PMN environment is complex and time-varying and accurate channel state information cannot be acquired in real time, it is challenging to manage RISs so that they serve the PMN effectively. We begin by formulating an optimization problem over the RIS phase shifts and reflection coefficients. Furthermore, motivated by recent developments in deep reinforcement learning (DRL), we propose an asynchronous advantage actor-critic (A3C)-based method for solving the problem by appropriately designing the action space, state space, and reward function. Simulation results indicate that deploying RISs within a region can significantly facilitate interference suppression. The proposed A3C-based scheme achieves a higher capacity than baseline schemes and approaches the upper bound as the number of RISs increases.
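The abstract's RL formulation (state = observed channel conditions, action = RIS phase-shift configuration, reward = achievable capacity) can be illustrated with a toy environment. The sketch below is a minimal illustration under assumed conditions: all dimensions, the Rayleigh channel model, unit noise power, and the random-search baseline are illustrative assumptions, not the paper's actual system model, and an A3C agent would replace the random policy.

```python
import numpy as np

class RISDownlinkEnv:
    """Toy RIS-aided downlink: state = channel magnitudes,
    action = one discrete phase index per RIS element,
    reward = sum capacity over users (all assumptions)."""

    def __init__(self, n_elements=8, n_users=2, phase_levels=4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_elements = n_elements
        self.n_users = n_users
        # Discrete reflection coefficients on the unit circle.
        self.phases = np.exp(2j * np.pi * np.arange(phase_levels) / phase_levels)

    def reset(self):
        # BS -> RIS and RIS -> user channels, Rayleigh-faded (assumed).
        self.h_br = (self.rng.standard_normal(self.n_elements)
                     + 1j * self.rng.standard_normal(self.n_elements)) / np.sqrt(2)
        self.h_ru = (self.rng.standard_normal((self.n_users, self.n_elements))
                     + 1j * self.rng.standard_normal((self.n_users, self.n_elements))) / np.sqrt(2)
        return self._state()

    def _state(self):
        # Observable state: channel gain magnitudes, flattened.
        return np.concatenate([np.abs(self.h_br), np.abs(self.h_ru).ravel()])

    def step(self, action):
        theta = self.phases[action]               # chosen reflection coefficients
        eff = self.h_ru @ (theta * self.h_br)     # effective cascaded channel per user
        snr = np.abs(eff) ** 2                    # unit noise power (assumed)
        reward = float(np.sum(np.log2(1.0 + snr)))  # sum capacity as reward
        return self._state(), reward

env = RISDownlinkEnv()
state = env.reset()
# Random-search baseline over phase configurations; a learned A3C policy
# would instead map `state` to phase indices.
best = max(env.step(np.random.default_rng(k).integers(0, 4, env.n_elements))[1]
           for k in range(100))
print(f"best random-search sum capacity: {best:.2f} bit/s/Hz")
```

Even this crude setup shows why the problem suits DRL: the reward depends on the cascaded product of two fading channels, so the phase configuration must be adapted whenever the channels change.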
Pages: 18