Dynamic Coded Caching in Cellular Networks with User Mobility: A Reinforcement Learning Method

Cited: 0
Authors
Zhu, Guangyu [1 ]
Guo, Caili [1 ]
Zhang, Tiankui [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Informat & Commun Engn, Beijing, Peoples R China
Source
2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL | 2023
Funding
National Natural Science Foundation of China;
Keywords
coded caching; mobility; dynamic networks; reinforcement learning;
DOI
10.1109/VTC2023-Fall60731.2023.10333414
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Coded caching relieves cellular network traffic by increasing the effective transmission rate: multiple user requests are satisfied simultaneously, and specific contents stored in each user's private cache memory serve as side information for decoding individual requests from the coded broadcast messages. Exploiting local content popularity can improve caching performance dramatically. In mobility scenarios, however, local content popularity varies with user movements; worse, cached contents may become outdated when the user's location changes. In this paper, we propose a dynamic coded caching scheme that reduces the loss of coded caching gain caused by user movement and the dynamic variation of local content popularity. We quantify the relationship between user preference, local popularity, and user mobility, formulate a metric to measure the performance of the proposed coded caching scheme, and cast the cache replacement strategy in the mobility scenario as a reinforcement learning problem. Numerical results verify that the obtained replacement policy significantly outperforms popularity-based, least-frequently-used, and multilayer replacement policies in terms of traffic offloading.
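A minimal illustrative sketch, not the authors' algorithm (the abstract does not detail it): tabular Q-learning for cache replacement under shifting local popularity, where a periodic re-ranking of a Zipf popularity model stands in for the user moving between cells. The catalogue size, cache size, episode structure, and the hit-probability reward are all assumptions introduced here for illustration.

import random
from collections import defaultdict

N_CONTENTS = 10                  # catalogue size (assumed)
CACHE_SIZE = 3                   # local cache slots (assumed)
EPISODES = 2000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def zipf_weights(order, s=1.0):
    """Zipf popularity over a content ranking (models local popularity)."""
    return {c: 1.0 / (rank + 1) ** s for rank, c in enumerate(order)}

def sample_request(weights):
    contents, probs = zip(*weights.items())
    return random.choices(contents, weights=probs, k=1)[0]

Q = defaultdict(float)           # Q[(state, action)] -> value
ranking = list(range(N_CONTENTS))

for _episode in range(EPISODES):
    random.shuffle(ranking)      # user "moves": local popularity re-ranks
    weights = zipf_weights(ranking)
    cache = ranking[:CACHE_SIZE] # seed the cache with currently popular items
    req = sample_request(weights)
    for _ in range(50):          # requests served while in this cell
        next_req = sample_request(weights)
        if req not in cache:     # miss: decide which slot (if any) to replace
            state = (tuple(sorted(cache)), req)
            # epsilon-greedy: actions 0..CACHE_SIZE-1 evict that slot, CACHE_SIZE skips caching
            if random.random() < EPS:
                action = random.randrange(CACHE_SIZE + 1)
            else:
                action = max(range(CACHE_SIZE + 1), key=lambda a: Q[(state, a)])
            if action < CACHE_SIZE:
                cache[action] = req
            # reward: expected hit (offloading) probability of the updated cache
            reward = sum(weights[c] for c in cache) / sum(weights.values())
            next_state = (tuple(sorted(cache)), next_req)
            best_next = max(Q[(next_state, a)] for a in range(CACHE_SIZE + 1))
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        req = next_req

In the paper's setting, the state and reward would additionally have to encode the coded-caching delivery gain, user preference, and mobility statistics; this sketch only captures the replacement decision itself.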
Pages: 5
Related Papers
50 records
  • [41] Tokuyama, Kiichi; Miyoshi, Naoto. Data Rate and Handoff Rate Analysis for User Mobility in Cellular Networks. 2018 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2018.
  • [42] Gao, Ruifeng; Li, Ye; Wang, Jue; Quek, Tony Q. S. Dynamic Sparse Coded Multi-Hop Transmissions Using Reinforcement Learning. IEEE COMMUNICATIONS LETTERS, 2020, 24 (10): 2206-2210.
  • [43] Santos, Guto Leoni; Kelner, Judith; Sadok, Djamel; Endo, Patricia Takako. Using Reinforcement Learning to Allocate and Manage SFC in Cellular Networks. 2020 16TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT (CNSM), 2020.
  • [44] Yu, Dongjin; Wu, Tong; Liu, Chengfei; Wang, Dongjing. Joint Content Caching and Recommendation in Opportunistic Mobile Networks Through Deep Reinforcement Learning and Broad Learning. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (04): 2727-2741.
  • [45] Song, Chunhe; Xu, Wenxiang; Wu, Tingting; Yu, Shimao; Zeng, Peng; Zhang, Ning. QoE-Driven Edge Caching in Vehicle Networks Based on Deep Reinforcement Learning. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (06): 5286-5295.
  • [46] Xu, Qichao; Su, Zhou; Lu, Rongxing. Game Theory and Reinforcement Learning Based Secure Edge Caching in Mobile Social Networks. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2020, 15: 3415-3429.
  • [47] Kim, Su Min; Jung, Bang Chul; Choi, Wan; Sung, Dan Keun. Effects of Heterogenous Mobility on Rate Adaptation and User Scheduling in Cellular Networks With HARQ. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2013, 62 (06): 2735-2748.
  • [48] Zhang, En; Ma, Wenming; Zhang, Jinkai; Xia, Xuchen. A Service Recommendation System Based on Dynamic User Groups and Reinforcement Learning. ELECTRONICS, 2023, 12 (24).
  • [49] Ding, Hui; Zhao, Feng; Tian, Jie; Li, Dongyang; Zhang, Haixia. A deep reinforcement learning for user association and power control in heterogeneous networks. AD HOC NETWORKS, 2020, 102.
  • [50] Meer, Irshad A.; Besser, Karl-Ludwig; Ozger, Mustafa; Poor, H. Vincent; Cavdar, Cicek. Reinforcement Learning Based Dynamic Power Control for UAV Mobility Management. FIFTY-SEVENTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, IEEECONF, 2023: 724-728.