Dynamic resource allocation in 5G networks using hybrid RL-CNN model for optimized latency and quality of service

Cited by: 0
Authors
Karuppiyan, Muthulakshmi [1 ,5 ]
Subramani, Hariharan [2 ]
Raju, Shanthy Kandasamy [3 ]
Prakasam, Manimekalai Maradi Anthonymuthu [4 ]
Affiliations
[1] Sri Krishna Coll Technol, Dept Elect & Commun Engn, Coimbatore, India
[2] Panimalar Engn Coll, Dept Comp Sci & Engn, Chennai, India
[3] Loyola Inst Technol, Dept Elect & Commun Engn, Chennai, India
[4] Karunya Inst Technol & Sci, Dept Elect & Commun Engn, Coimbatore, India
[5] Sri Krishna Coll Technol, Dept Elect & Commun Engn, Coimbatore 641042, India
Keywords
5G networks; dynamic resource allocation; convolutional neural networks; reinforcement learning; quality of service; latency
DOI
10.1080/0954898X.2024.2334282
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The rapid deployment of 5G networks necessitates innovative solutions for efficient and dynamic resource allocation. Current strategies, although effective to some extent, lack real-time adaptability and scalability in complex, dynamically changing environments. This paper introduces the Dynamic Resource Allocator using RL-CNN (DRARLCNN), a novel machine learning model that addresses these shortcomings. By merging Convolutional Neural Networks (CNN) for feature extraction with Reinforcement Learning (RL) for decision-making, DRARLCNN optimizes resource allocation, minimizing latency and maximizing Quality of Service (QoS). Using a state-of-the-art "5G Resource Allocation Dataset", the research employs Python, TensorFlow, and OpenAI Gym to implement and test the model in a simulated 5G environment. Results demonstrate the effectiveness of DRARLCNN, with an R² score of 0.517, an MSE of 0.035, and an RMSE of 0.188, surpassing existing methods in allocation efficiency and latency and setting a new benchmark for future research in dynamic 5G resource allocation. Through its innovative approach and promising results, DRARLCNN opens avenues for further advancements in optimizing resource allocation within dynamic 5G networks.
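The abstract reports three standard regression metrics (MSE, RMSE, R²) for the model's allocation predictions. As a point of reference, the following is a minimal sketch of how these metrics are defined; the `regression_metrics` helper is illustrative only and is not code from the paper.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE and R^2 for paired true/predicted values.

    MSE  = mean of squared residuals
    RMSE = square root of MSE
    R^2  = 1 - (residual sum of squares / total sum of squares)
    """
    n = len(y_true)
    residual_ss = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    mse = residual_ss / n
    rmse = math.sqrt(mse)
    mean_true = sum(y_true) / n
    total_ss = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1.0 - residual_ss / total_ss
    return mse, rmse, r2

# Example with toy data (not the paper's dataset):
mse, rmse, r2 = regression_metrics([1.0, 2.0, 3.0, 4.0],
                                   [1.1, 1.9, 3.2, 3.8])
```

Note that RMSE is always the square root of MSE, which is consistent with the reported figures (0.188 ≈ √0.035).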
Pages: 25