Policy-Gradient-Based Reinforcement Learning for Computing Resources Allocation in O-RAN

Cited by: 8
Authors
Sharara, Mahdi [1 ]
Pamuklu, Turgay [2 ]
Hoteit, Sahar [1 ]
Veque, Veronique [1 ]
Erol-Kantarci, Melike [2 ]
Affiliations
[1] Univ Paris Saclay, Lab Signaux & Syst, CNRS, Cent Supelec, Gif Sur Yvette, France
[2] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON, Canada
Source
PROCEEDINGS OF THE 2022 IEEE 11TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING (IEEE CLOUDNET 2022) | 2022
Keywords
O-RAN; Integer Linear Programming; Reinforcement Learning; Computing Resources Allocation; 6G;
DOI
10.1109/CloudNet55617.2022.9978863
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Subject Classification Code
0812
Abstract
Open Radio Access Network (O-RAN) is a novel architecture that aims to disaggregate network components to reduce capital and operational costs and to open the interfaces to ensure interoperability. In this work, we consider the problem of allocating computing resources to process the data of enhanced Mobile BroadBand (eMBB) users and Ultra-Reliable Low-Latency Communication (URLLC) users. Assuming that users' frames from different base stations are processed in a shared O-Cloud, we model the computing resources allocation problem as an Integer Linear Programming (ILP) problem that aims to allocate computing resources fairly between eMBB and URLLC users and to optimize the QoS of URLLC users without neglecting eMBB users. Due to the high complexity of solving an ILP problem, we also model the problem using policy-gradient Reinforcement Learning (RL). Our results demonstrate the ability of our RL-based solution to perform close to the ILP solver while having much lower computational complexity. Across different numbers of Open Radio Units (O-RUs), the objective value of the RL agent deviates from the ILP objective by no more than 6%.
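The abstract contrasts an exact ILP solver with a lower-complexity policy-gradient RL agent. The following is a minimal sketch of the policy-gradient (REINFORCE) idea applied to a toy computing-allocation task; the reward function, user mix, and candidate CPU shares are illustrative assumptions for exposition, not the paper's actual system model or objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumption, not the paper's model): 4 users,
# the first two URLLC and the last two eMBB. Each user is assigned one
# of 3 candidate CPU shares; the reward favors large URLLC shares with
# a mild fairness term, loosely echoing the fairness-aware objective.
N_USERS, N_SHARES = 4, 3
is_urllc = np.array([1.0, 1.0, 0.0, 0.0])
shares = np.array([0.1, 0.3, 0.6])  # candidate CPU fractions per user

def reward(alloc_idx):
    """Higher when URLLC users get larger CPU shares; the variance
    penalty discourages starving the eMBB users entirely."""
    s = shares[alloc_idx]
    return 2.0 * np.sum(is_urllc * s) - np.var(s)

def softmax(x):
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

# One independent softmax policy per user over the share choices.
theta = np.zeros((N_USERS, N_SHARES))
lr, baseline = 0.1, 0.0

for step in range(2000):
    probs = softmax(theta)                              # (N_USERS, N_SHARES)
    acts = np.array([rng.choice(N_SHARES, p=p) for p in probs])
    r = reward(acts)
    baseline += 0.01 * (r - baseline)                   # running reward baseline
    # REINFORCE: grad of log pi(a) for a softmax policy is one_hot(a) - probs
    grad = -probs
    grad[np.arange(N_USERS), acts] += 1.0
    theta += lr * (r - baseline) * grad

final = softmax(theta).argmax(axis=1)  # greedy share index per user
print(final)
```

Unlike the ILP, which must be re-solved at each decision epoch, the trained policy produces an allocation with a single forward pass, which is the source of the complexity advantage the abstract reports.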
Pages: 229-236 (8 pages)