Dynamic link utilization empowered by reinforcement learning for adaptive storage allocation in MANET

Cited by: 7
Authors
Anand, R. P. Prem [1 ]
Senthilkumar, V. [2 ]
Kumar, Gokul [3 ]
Rajendran, A. [4 ]
Rajaram, A. [5 ]
Affiliations
[1] SCSVMV Univ, Dept Elect & Commun Engn, Kanchipuram 631561, India
[2] Er Perumal Manimekalai Coll Engn, Hosur 635117, India
[3] Ross Techsys, Bangalore 562129, India
[4] Karpagam Coll Engn, Dept ECE, Coimbatore, India
[5] EGS Pillay Engn Coll, Dept Elect & Commun Engn, Nagapattinam 611002, India
Keywords
Position middle storage space allocation; Dynamic link utilization with reinforcement learning; Network quality; Data storage optimization; Data transmission;
DOI
10.1007/s00500-023-09281-8
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In modern wireless networks, mobile nodes often struggle to retain a sufficient number of data packets because of the limited storage capacity within each cluster. This shortfall degrades network performance by compromising data quality during transmission, and the delays incurred while packets await storage allocation reduce throughput and increase end-to-end latency. To address these issues, we present a Dynamic Link Utilization with Reinforcement Learning (DLU-RL) method designed to optimize storage allocation for communication data packets and thereby significantly enhance network performance. Instead of static allocation, DLU-RL employs dynamic strategies guided by reinforcement learning algorithms, tackling storage constraints while proactively adapting to varying network conditions and traffic patterns. Our approach first performs a comprehensive analysis of the storage capacities of all nodes, establishing a baseline for dynamic resource allocation. The DLU-RL framework then assigns storage space according to real-time demand and priority, optimizing storage utilization on the fly. Implementing DLU-RL yields substantial throughput gains alongside reduced end-to-end delays. This research contributes efficient storage allocation techniques and pioneers the integration of reinforcement learning for performance optimization in wireless communication networks. The proposed framework represents a shift in storage management, offering adaptability, efficiency, and real-time optimization to meet the evolving challenges of wireless communication.
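The abstract's core loop (observe per-node demand, grant storage, reward allocations that raise throughput without wasting shared space) can be sketched as a toy reinforcement-learning allocator. This is a minimal illustration only: the paper does not publish DLU-RL's state, action, or reward design here, so the demand levels, slot counts, and reward shape below are all assumed simplifications, not the authors' method.

```python
import random

MAX_SLOTS = 4          # most storage slots a node may be granted (assumed)
ALPHA = 0.1            # learning rate
EPSILON = 0.1          # exploration probability

def reward(demand_level, slots):
    """Toy reward: packets served proxy for throughput; idle slots are
    wasted shared storage and incur a penalty (standing in for the
    queuing delay inflicted on other nodes)."""
    needed = demand_level + 1           # slots the node actually needs
    served = min(slots, needed)         # throughput proxy
    waste = max(0, slots - needed)      # over-allocation penalty
    return served - 0.5 * waste

def train(episodes=20000, seed=0):
    """One-step (bandit-style) Q-learning over (demand level, slots granted)."""
    rng = random.Random(seed)
    q = [[0.0] * (MAX_SLOTS + 1) for _ in range(3)]   # Q[state][action]
    for _ in range(episodes):
        s = rng.randrange(3)            # observed demand: 0=low, 1=mid, 2=high
        if rng.random() < EPSILON:      # epsilon-greedy exploration
            a = rng.randrange(MAX_SLOTS + 1)
        else:
            a = max(range(MAX_SLOTS + 1), key=lambda x: q[s][x])
        q[s][a] += ALPHA * (reward(s, a) - q[s][a])   # incremental update
    return q

q = train()
allocate = lambda demand: max(range(MAX_SLOTS + 1), key=lambda a: q[demand][a])
```

After training, the learned policy grants roughly as many slots as each demand level needs rather than a fixed static share, mirroring the demand- and priority-driven allocation the abstract describes; a full DLU-RL system would additionally condition on link quality and traffic patterns across the cluster.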
Pages: 5275-5285
Page count: 11