A Reinforcement Learning Based Data Caching in Wireless Networks

Cited by: 4
Authors
Sheraz, Muhammad [1 ]
Shafique, Shahryar [1 ]
Imran, Sohail [1 ]
Asif, Muhammad [2 ]
Ullah, Rizwan [3 ]
Ibrar, Muhammad [4 ]
Khan, Jahanzeb [1 ]
Wuttisittikulkij, Lunchakorn [3 ]
Affiliations
[1] Iqra Natl Univ, Dept Elect Engn, Peshawar 25000, Pakistan
[2] Univ Sci & Technol, Dept Elect Engn, Main Campus, Bannu 28100, Pakistan
[3] Chulalongkorn Univ, Dept Elect Engn, Wireless Commun Ecosyst Res Unit, Bangkok 10330, Thailand
[4] Islamia Coll Peshawar, Dept Phys, Peshawar 25000, Pakistan
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Iss. 11
Keywords
caching; network delay; small base station; 5G; dynamic data popularity; reinforcement learning; Q-learning; PLACEMENT; DEVICE;
DOI
10.3390/app12115692
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Data caching has emerged as a promising technique for handling the growing data traffic and backhaul congestion of wireless networks. However, a key concern is how and where to place contents so as to optimize data access by users. Caching can be brought close to users by deploying cache entities at Small Base Stations (SBSs). In this approach, SBSs fetch contents through the core network during off-peak traffic hours and then serve content-demanding users with low latency during peak traffic hours. In this paper, we exploit the potential of data caching at the SBS level to minimize data access delay. We propose an intelligence-based data caching mechanism inspired by an artificial intelligence approach known as Reinforcement Learning (RL). The proposed RL-based caching mechanism learns dynamically and tracks network states to capture users' diverse and time-varying data demands. It optimizes caching at the SBS level by observing users' data demands and locations so as to efficiently utilize the limited cache resources of each SBS. Extensive simulations evaluate the proposed caching mechanism against factors such as caching capacity and data library size. The results demonstrate that the proposed mechanism achieves a 4% performance gain in terms of delay vs. contents, 3.5% in delay vs. users, 2.6% in delay vs. cache capacity, 18% in percentage traffic offloading vs. popularity skewness (gamma), and 6% in backhaul saving vs. cache capacity.
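The abstract describes learning which contents to hold in a capacity-limited SBS cache under skewed (Zipf-distributed) demand. As a rough illustration only, and not the authors' algorithm, the sketch below keeps a TD-style, exponentially weighted value estimate per content and evicts the lowest-valued cached item on a miss, with epsilon-greedy exploration; all class names, parameters, and the reward definition are hypothetical.

```python
import random
from collections import defaultdict


def zipf_popularity(n, gamma):
    """Zipf request distribution with skewness gamma, as used in the
    paper's evaluation of traffic offloading."""
    w = [1.0 / (i ** gamma) for i in range(1, n + 1)]
    s = sum(w)
    return [x / s for x in w]


class RLCache:
    """Hypothetical sketch of RL-style caching at an SBS: learn a
    per-content value online and evict the lowest-valued content."""

    def __init__(self, capacity, alpha=0.2, epsilon=0.05, seed=0):
        self.capacity = capacity      # limited SBS cache size
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration probability
        self.q = defaultdict(float)   # learned caching value per content
        self.cache = set()
        self.rng = random.Random(seed)
        self.hits = 0
        self.requests = 0

    def request(self, content):
        self.requests += 1
        hit = content in self.cache
        if hit:
            self.hits += 1
        # TD-style update: decay all value estimates, then reinforce the
        # requested content; q[c] tracks an exponentially weighted
        # estimate of how often content c is demanded.
        for c in self.q:
            self.q[c] *= (1.0 - self.alpha)
        self.q[content] += self.alpha
        if not hit:
            self._admit(content)
        return hit

    def _admit(self, content):
        if len(self.cache) < self.capacity:
            self.cache.add(content)
            return
        if self.rng.random() < self.epsilon:
            # explore: evict a random cached content
            victim = self.rng.choice(sorted(self.cache))
            self.cache.discard(victim)
            self.cache.add(content)
        elif self.q[content] > min(self.q[c] for c in self.cache):
            # exploit: replace the lowest-valued cached content
            victim = min(self.cache, key=lambda c: self.q[c])
            self.cache.discard(victim)
            self.cache.add(content)
```

Under Zipf-skewed requests this keeps the most popular contents resident, so the cache hit ratio (i.e., backhaul saving) exceeds what uniform-random placement of the same capacity would achieve; a full Q-learning formulation would additionally model state transitions and discounted future reward.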
Pages: 14
Related Papers
35 records in total
  • [31] Mode Selection and Resource Allocation in Sliced Fog Radio Access Networks: A Reinforcement Learning Approach
    Xiang, Hongyu
    Peng, Mugen
    Sun, Yaohua
    Yan, Shi
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (04): 4271-4284
  • [32] On Mobile Edge Caching
    Yao, Jingjing
    Han, Tao
    Ansari, Nirwan
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2019, 21 (03): 2525-2553
  • [33] Cost-Effective Cache Deployment in Mobile Heterogeneous Networks
    Zhang, Shan
    Zhang, Ning
    Yang, Peng
    Shen, Xuemin
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2017, 66 (12): 11264-11276
  • [34] Caching on the Move: A User Interest-Driven Caching Strategy for D2D Content Sharing
    Zhang, Wei
    Wu, Dan
    Yang, Wendong
    Cai, Yueming
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (03): 2958-2971
  • [35] Incentive-Driven Deep Reinforcement Learning for Content Caching and D2D Offloading
    Zhou, Huan
    Wu, Tong
    Zhang, Haijun
    Wu, Jie
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (08): 2445-2460