Energy-Efficient Carrier Aggregation in 5G Using Constrained Multi-Agent MDP

Cited: 0
|
Authors
Elsayed, Medhat [1 ,3 ]
Joda, Roghayeh [1 ,3 ]
Khoramnejad, Fahime [2 ]
Chan, David [1 ]
Sediq, Akram Bin [3 ]
Boudreau, Gary [3 ]
Erol-Kantarci, Melike [2 ]
Affiliations
[1] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON K1N 6N5, Canada
[2] Univ Manitoba, Dept Elect & Comp Engn, Winnipeg, MB R3T 2N2, Canada
[3] Ericsson Canada, Dept Res & Dev, Ottawa, ON K2K 2V6, Canada
Source
IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING | 2024, Vol. 8, No. 4
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
Throughput; Power demand; 5G mobile communication; Energy consumption; Q-learning; Heuristic algorithms; Energy efficiency; 5G; carrier activation/deactivation; component carrier manager; traffic splitting; reinforcement learning; constrained multi-agent MDP; RADIO RESOURCE-MANAGEMENT; DESIGN;
DOI
10.1109/TGCN.2024.3386066
Chinese Library Classification (CLC)
TN [Electronics and Communication Technology];
Discipline Code
0809;
Abstract
Carrier Aggregation (CA) is a promising technology in LTE and 5G networks that enhances user throughput. However, since each User Equipment (UE) must continuously monitor the activated Component Carriers (CCs) in CA, UE energy consumption increases. To reduce energy consumption while maximizing UE throughput, we propose a dynamic and proactive CC management scheme for 5G based on a Q-learning algorithm. We first model the problem as a Constrained Multi-agent Markov Decision Process (CMMDP) and then solve it with Q-learning. The inter-arrival time and size of the next incoming data bursts are proactively predicted and, together with the data remaining in the buffer, are incorporated into the state space and reward function of the learning model. Our proposed scheme is compared against three baselines. In the first and second baselines, all CCs and only a single CC are activated for each UE, respectively. The third baseline is a simplified version of our Reinforcement Learning (RL) algorithm that ignores the remaining data in the users' scheduling buffers and balances throughput against the number of activated CCs only at low traffic load. Simulation results reveal that the proposed Q-learning algorithm outperforms the baselines: it achieves the same throughput as the all-CC-activation scheme while reducing UE power consumption by about 20%. These benefits are achieved by dynamically activating and deactivating CCs according to each UE's traffic pattern.
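The abstract's approach can be illustrated with a minimal sketch: a tabular Q-learning agent per UE that picks how many CCs to activate, with a state built from buffer occupancy and the predicted next burst, and a reward trading throughput against CC-monitoring power. All numbers, thresholds, and weights below are illustrative assumptions, not the authors' exact CMMDP formulation.

```python
import random

N_CC = 4                       # max component carriers per UE (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def discretize_state(buffer_bytes, predicted_burst_bytes, inter_arrival_ms):
    """Map continuous observations (buffer occupancy, predicted next-burst
    size and inter-arrival time) into a small discrete state, mirroring the
    state space described in the abstract. Bin edges are placeholders."""
    buf = min(buffer_bytes // 50_000, 3)
    burst = min(predicted_burst_bytes // 50_000, 3)
    gap = 0 if inter_arrival_ms < 10 else 1
    return (buf, burst, gap)

def reward(throughput_mbps, n_active_cc, power_per_cc_mw=100.0):
    """Throughput minus a penalty for the power spent monitoring active CCs
    (the 0.05 weight is an assumed trade-off coefficient)."""
    return throughput_mbps - 0.05 * n_active_cc * power_per_cc_mw

class CCAgent:
    """One agent per UE; the action is the number of CCs to keep active."""
    def __init__(self):
        self.q = {}  # (state, action) -> estimated value

    def act(self, state):
        # epsilon-greedy action selection over 1..N_CC active carriers
        if random.random() < EPS:
            return random.randint(1, N_CC)
        return max(range(1, N_CC + 1),
                   key=lambda a: self.q.get((state, a), 0.0))

    def update(self, s, a, r, s_next):
        # standard one-step Q-learning update
        best_next = max(self.q.get((s_next, b), 0.0)
                        for b in range(1, N_CC + 1))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)
```

In this toy setup, a low-load state (small buffer, long inter-arrival gap) accumulates higher Q-values for few active CCs, because extra carriers add power cost without throughput gain, which is the intuition behind the reported ~20% power saving.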
Pages: 1595-1606 (12 pages)
Related Papers (50 total)
  • [1] Delay-Aware and Energy-Efficient Carrier Aggregation in 5G Using Double Deep Q-Networks
    Khoramnejad, Fahime
    Joda, Roghayeh
    Bin Sediq, Akram
    Abou-Zeid, Hatem
    Atawia, Ramy
    Boudreau, Gary
    Erol-Kantarci, Melike
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70 (10) : 6615 - 6629
  • [2] Energy-efficient joint resource allocation in 5G HetNet using Multi-Agent Parameterized Deep Reinforcement learning
    Mughees, Amna
    Tahir, Mohammad
    Sheikh, Muhammad Aman
    Amphawan, Angela
    Meng, Yap Kian
    Ahad, Abdul
    Chamran, Kazem
    PHYSICAL COMMUNICATION, 2023, 61
  • [3] Reinforcement Learning for Energy-Efficient 5G Massive MIMO: Intelligent Antenna Switching
    Hoffmann, Marcin
    Kryszkiewicz, Pawel
    IEEE ACCESS, 2021, 9 : 130329 - 130339
  • [4] An Energy-Efficient Collaborative Caching Scheme for 5G Wireless Network
    Furqan, Muhammad
    Yan, Wen
    Zhang, Cheng
    Iqbal, Shahid
    Jan, Qasim
    Huang, Yongming
    IEEE ACCESS, 2019, 7 : 156907 - 156916
  • [5] Towards Multi-Agent Control in Energy-Efficient Data Centres
    Berezovskaya, Yulia
    Yang, Chen-Wei
    Vyatkin, Valeriy
    IECON 2020: THE 46TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2020, : 3574 - 3579
  • [6] Reinforcement Learning Based Energy-Efficient Component Carrier Activation-Deactivation in 5G
    Elsayed, Medhat
    Joda, Roghayeh
    Abou-Zeid, Hatem
    Atawia, Ramy
    Bin Sediq, Akram
    Boudreau, Gary
    Erol-Kantarci, Melike
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [7] Energy-Efficient Strategies for Multi-Agent Continuous Cooperative Patrolling Problems
    Wu, Lingying
    Sugiyama, Ayumi
    Sugawara, Toshiharu
    KNOWLEDGE-BASED AND INTELLIGENT INFORMATION & ENGINEERING SYSTEMS (KES 2019), 2019, 159 : 465 - 474
  • [8] Energy-Efficient and Reliable Internet of Things for 5G: A Framework for Interference Control
    Ahmed Osman, Radwa
    Zaki, Amira I.
    ELECTRONICS, 2020, 9 (12) : 1 - 18
  • [9] Joint Downlink and Uplink Resource Allocation for Energy-Efficient Carrier Aggregation
    Yu, Guanding
    Chen, Qimei
    Yin, Rui
    Zhang, Huazi
    Li, Geoffrey Ye
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2015, 14 (06) : 3207 - 3218
  • [10] Joint Security and Energy-Efficient Cooperative Architecture for 5G Underlaying Cellular Networks
    Guo, Li
    Zhu, Zhiliang
    Lau, Francis C. M.
    Zhao, Yuli
    Yu, Hai
    SYMMETRY-BASEL, 2022, 14 (06):