Constrained Deep Reinforcement Learning for Fronthaul Compression Optimization

Cited: 0
Authors
Gronland, Axel [1 ,2 ]
Russo, Alessio [1 ]
Jedra, Yassir [1 ]
Klaiqi, Bleron [2 ]
Gelabert, Xavier [2 ]
Affiliations
[1] Royal Inst Technol KTH, Stockholm, Sweden
[2] Stockholm Res Ctr, Huawei Technol Sweden AB, Stockholm, Sweden
Source
2024 IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING FOR COMMUNICATION AND NETWORKING, ICMLCN 2024 | 2024
Keywords
C-RAN; fronthaul; machine learning; reinforcement learning; performance evaluation
DOI
10.1109/ICMLCN59089.2024.10624764
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In the Centralized Radio Access Network (C-RAN) architecture, functions can be placed at central or distributed locations. This architecture can offer higher capacity and cost savings, but it also places strict requirements on the fronthaul (FH); while such requirements can take many forms, in this work we consider constraints on packet loss and latency. Adaptive FH compression schemes, which adjust the amount of compression to varying FH traffic, are a promising approach to dealing with stringent FH requirements. In this work, we design such a compression scheme using a model-free, off-policy deep reinforcement learning algorithm that accounts for FH latency and packet-loss constraints. Furthermore, the algorithm is designed for model transparency and interpretability, which is crucial for AI trustworthiness in performance-critical domains. We show that our algorithm successfully chooses an appropriate compression scheme while satisfying the constraints, and exhibits a roughly 70% increase in FH utilization compared to a reference scheme.
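The abstract only names the approach; for intuition, below is a minimal sketch of one common way such a constrained, model-free, off-policy scheme can be set up: tabular Q-learning with a Lagrangian relaxation of the packet-loss constraint and dual ascent on the multiplier. Everything here (the toy load/compression environment, the cost model, COST_BUDGET, and all hyperparameters) is an illustrative assumption, not the algorithm from the paper.

# Hypothetical sketch: model-free, off-policy RL (tabular Q-learning) with a
# packet-loss constraint handled via Lagrangian relaxation. The environment,
# state/action meanings, and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 8, 4           # quantized FH load levels x compression levels
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
LAMBDA_LR = 0.01                     # dual-ascent step size for the multiplier
COST_BUDGET = 0.05                   # assumed tolerated packet-loss rate

Q = np.zeros((N_STATES, N_ACTIONS))  # action-value estimates
lam = 0.0                            # Lagrange multiplier for the loss constraint

def step(load, compression):
    """Toy dynamics: lighter compression preserves signal quality (reward) but
    raises the packet-loss risk when the FH load is high."""
    quality = 1.0 - compression / (N_ACTIONS - 1)
    loss = float(rng.random() < 0.3 * quality * load / (N_STATES - 1))
    next_load = rng.integers(N_STATES)   # i.i.d. load arrivals, for simplicity
    return next_load, quality, loss

s = rng.integers(N_STATES)
for _ in range(50_000):
    a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(Q[s]))
    s_next, r, c = step(s, a)
    # Penalized reward: maximize r subject to E[c] <= COST_BUDGET
    target = (r - lam * c) + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (target - Q[s, a])
    # Dual ascent: the multiplier grows while the constraint is violated
    lam = max(0.0, lam + LAMBDA_LR * (c - COST_BUDGET))
    s = s_next

print("compression level per load state:", Q.argmax(axis=1))
print("final multiplier:", round(lam, 3))

Under these toy dynamics, the learned policy compresses more aggressively at high-load states (to respect the loss budget) and less at low-load states (to preserve quality), which mirrors the adaptive-compression idea described in the abstract.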
Pages: 498-504
Page count: 7
Related Papers (50 total)
  • [1] Learning-Based Latency-Constrained Fronthaul Compression Optimization in C-RAN
    Gronland, Axel
    Klaiqi, Bleron
    Gelabert, Xavier
    2023 IEEE 28TH INTERNATIONAL WORKSHOP ON COMPUTER AIDED MODELING AND DESIGN OF COMMUNICATION LINKS AND NETWORKS, CAMAD 2023, 2023, : 134 - 139
  • [2] Constrained attractor selection using deep reinforcement learning
    Wang, Xue-She
    Turner, James D.
    Mann, Brian P.
    JOURNAL OF VIBRATION AND CONTROL, 2021, 27 (5-6) : 502 - 514
  • [3] Scan Chain Clustering and Optimization with Constrained Clustering and Reinforcement Learning
    Abdul, Naiju Karim
    Antony, George
    Rao, Rahul M.
    Skariah, Suriya T.
    MLCAD '22: PROCEEDINGS OF THE 2022 ACM/IEEE 4TH WORKSHOP ON MACHINE LEARNING FOR CAD (MLCAD), 2022, : 83 - 89
  • [4] Deep reinforcement learning-based framework for constrained any-objective optimization
    Honari, H.
    Khodaygan, S.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (07) : 9575 - 9591
  • [5] Online VNF Placement using Deep Reinforcement Learning and Reward Constrained Policy Optimization
    Mohamed, Ramy
    Avgeris, Marios
    Leivadeas, Aris
    Lambadaris, Ioannis
    2024 IEEE INTERNATIONAL MEDITERRANEAN CONFERENCE ON COMMUNICATIONS AND NETWORKING, MEDITCOM 2024, 2024, : 269 - 274
  • [6] Deep Reinforcement Learning for Multiobjective Optimization
    Li, Kaiwen
    Zhang, Tao
    Wang, Rui
    IEEE TRANSACTIONS ON CYBERNETICS, 2021, 51 (06) : 3103 - 3114
  • [7] Reinforcement learning for deep portfolio optimization
    Yan, Ruyu
    Jin, Jiafei
    Han, Kun
    ELECTRONIC RESEARCH ARCHIVE, 2024, 32 (09): : 5176 - 5200
  • [8] Deep Reinforcement Learning for Optimization at Early Design Stages
    Servadei, Lorenzo
    Lee, Jin Hwa
    Arjona Medina, Jose A.
    Werner, Michael
    Hochreiter, Sepp
    Ecker, Wolfgang
    Wille, Robert
    IEEE DESIGN & TEST, 2023, 40 (01) : 43 - 51
  • [9] Optimization of Delta-Sigma Modulator Based on Reinforcement Learning for Mobile Fronthaul
    Yan, Zijun
    Zhu, Yixiao
    Yang, Guangying
    Hu, Weisheng
    IEEE PHOTONICS TECHNOLOGY LETTERS, 2025, 37 (07) : 397 - 400
  • [10] Constrained Reinforcement Learning for Dynamic Optimization under Uncertainty
    Petsagkourakis, P.
    Sandoval, I. O.
    Bradford, E.
    Zhang, D.
    del Rio-Chanona, E. A.
    IFAC PAPERSONLINE, 2020, 53 (02): : 11264 - 11270