Energy-Aware Dynamic VNF Splitting in O-RAN Using Deep Reinforcement Learning

Cited by: 5
Authors
Amiri, Esmaeil [1 ]
Wang, Ning [1 ]
Shojafar, Mohammad [1 ]
Tafazolli, Rahim [1 ]
Affiliations
[1] Univ Surrey, 5G 6GIC, Guildford GU2 7XH, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Open RAN (O-RAN); virtual network function (VNF); energy efficiency; deep reinforcement learning (DRL); NETWORK FUNCTION PLACEMENT;
DOI
10.1109/LWC.2023.3298548
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812
Abstract
This letter proposes an innovative energy-efficient Radio Access Network (RAN) disaggregation and virtualization method for Open RAN (O-RAN) that effectively addresses the challenges posed by dynamic traffic conditions. The energy consumption is first formulated as a multi-objective optimization problem, which is then solved by integrating the Advantage Actor-Critic (A2C) algorithm with a sequence-to-sequence model, owing to the sequential nature of RAN disaggregation and its long-term dependencies. According to the results, the proposed dynamic Virtual Network Function (VNF) splitting solution outperforms approaches without VNF splitting, significantly reducing energy consumption: it achieves energy savings of up to 56% and 63% under the traffic conditions of business and residential areas, respectively.
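As a rough illustration of the approach described in the abstract, the sketch below pairs a tiny recurrent policy (standing in for the sequence-to-sequence model) with an A2C-style update: split options are chosen sequentially per base station, a hidden state carries the dependency between successive decisions, and the critic's value baseline forms the advantage. All names, sizes, and the per-split energy model are illustrative assumptions, not the letter's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BS = 4        # base stations; split decisions are made sequentially
N_SPLITS = 3    # candidate VNF split options per base station
HID = 8         # size of the recurrent "decoder" hidden state
LR = 0.05       # learning rate for actor and critic

# Assumed energy cost of each split option, scaled by a station's traffic load.
SPLIT_ENERGY = np.array([1.0, 0.7, 0.5])

# Tiny recurrent policy: hidden-state transition, policy head, critic head.
W_h = rng.normal(0, 0.1, (HID, HID + 1))
W_pi = rng.normal(0, 0.1, (N_SPLITS, HID))
w_v = rng.normal(0, 0.1, HID)

def energy(loads, actions):
    """Total energy of a full split assignment (illustrative model)."""
    return float(np.sum(loads * SPLIT_ENERGY[np.array(actions)]))

def a2c_step(loads):
    """One rollout over all base stations plus an A2C-style update."""
    global W_pi, w_v
    h = np.zeros(HID)
    actions, probs, states = [], [], []
    for load in loads:
        # Recurrent state: previous decisions condition the next one.
        h = np.tanh(W_h @ np.append(h, load))
        logits = W_pi @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = int(rng.choice(N_SPLITS, p=p))
        actions.append(a)
        probs.append(p)
        states.append(h)
    reward = -energy(loads, actions)  # maximize reward = minimize energy
    for a, p, h in zip(actions, probs, states):
        adv = reward - float(w_v @ h)          # advantage vs. critic baseline
        grad_logits = -p.copy()
        grad_logits[a] += 1.0                  # grad of log-softmax at action a
        W_pi += LR * adv * np.outer(grad_logits, h)  # actor update
        w_v += LR * adv * h                    # critic update
    return actions, reward

loads = rng.uniform(0.2, 1.0, N_BS)
acts, r = a2c_step(loads)
print(acts, r)
```

In the letter the state, action, and reward are derived from the O-RAN traffic and energy model and the policy is a full sequence-to-sequence network; the sketch only mirrors the control flow of sequential decoding plus an advantage-weighted policy-gradient update.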
Pages: 1891-1895
Page count: 5