Deep Reinforcement Learning for Cooperative Content Caching in Vehicular Edge Computing and Networks

Cited: 263
Authors
Qiao, Guanhua [1 ]
Leng, Supeng [1 ]
Maharjan, Sabita [2 ]
Zhang, Yan [3 ]
Ansari, Nirwan [4 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[2] Simula Metropolitan Ctr Digital Engn, Ctr Resilient Networks & Applicat, N-0167 Oslo, Norway
[3] Univ Oslo, Dept Informat, N-0316 Oslo, Norway
[4] New Jersey Inst Technol, Dept Elect & Comp Engn, Adv Networking Lab, Newark, NJ 07102 USA
Source
IEEE INTERNET OF THINGS JOURNAL | 2020, Vol. 7, No. 1
Funding
EU Horizon 2020;
Keywords
Cooperative caching; Optimization; Edge computing; Computational modeling; Internet of Things; Indexes; Base stations; Content delivery; content placement; cooperative edge caching; deep deterministic policy gradient (DDPG); double time-scale Markov decision process (DTS-MDP); vehicular edge computing and networks;
DOI
10.1109/JIOT.2019.2945640
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this article, we propose a cooperative edge caching scheme, a new paradigm that jointly optimizes content placement and content delivery in vehicular edge computing and networks through flexible trilateral cooperation among a macro-cell station, roadside units, and smart vehicles. We formulate the joint optimization problem as a double time-scale Markov decision process (DTS-MDP), based on the observation that content timeliness changes less frequently than vehicle mobility and network states during the content delivery process. At the beginning of each large time-scale, the content placement/updating decision is obtained according to content popularity, vehicle driving paths, and resource availability. On the small time-scale, a joint vehicle scheduling and bandwidth allocation scheme is designed to minimize the content access cost while satisfying the constraint on content delivery latency. To solve the long-term mixed-integer linear programming (LT-MILP) problem, we propose a nature-inspired method based on the deep deterministic policy gradient (DDPG) framework that obtains a suboptimal solution with low computational complexity. Simulation results demonstrate that the proposed cooperative caching system reduces the system cost and the content delivery latency and improves the content hit ratio, compared with noncooperative and random edge caching schemes.
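The DDPG mechanics underlying the abstract's solution method can be illustrated with a minimal sketch. This is a hypothetical toy setting, not the paper's caching environment: linear maps stand in for the deep actor/critic networks, the state is an assumed small feature vector (e.g. popularity and resource indicators), the action a continuous allocation decision, and the reward a placeholder cost signal. Only the core DDPG ingredients are shown: a deterministic actor, a TD critic update against target networks, an actor update along the critic's action gradient, and Polyak (soft) target tracking.

```python
import numpy as np

rng = np.random.default_rng(0)
S_DIM, A_DIM = 4, 2          # toy dimensions (assumed for illustration)
GAMMA, TAU, LR = 0.99, 0.01, 1e-3

# Actor mu(s) and critic Q(s, a) as linear maps, plus slowly-tracking targets.
W_actor  = rng.normal(size=(A_DIM, S_DIM)) * 0.1
w_critic = rng.normal(size=(S_DIM + A_DIM,)) * 0.1
W_actor_t, w_critic_t = W_actor.copy(), w_critic.copy()

def mu(W, s):                # deterministic policy
    return W @ s

def q(w, s, a):              # state-action value
    return w @ np.concatenate([s, a])

for step in range(200):
    # One transition from a stand-in environment (random here for brevity).
    s  = rng.normal(size=S_DIM)
    a  = mu(W_actor, s) + rng.normal(scale=0.1, size=A_DIM)   # exploration noise
    r  = -np.sum(a ** 2)                                      # placeholder cost
    s2 = rng.normal(size=S_DIM)

    # Critic update: TD target uses the *target* actor and target critic.
    y = r + GAMMA * q(w_critic_t, s2, mu(W_actor_t, s2))
    td_err = y - q(w_critic, s, a)
    w_critic = w_critic + LR * td_err * np.concatenate([s, a])

    # Actor update: ascend Q by chaining dQ/da through the actor's parameters.
    dq_da = w_critic[S_DIM:]
    W_actor = W_actor + LR * np.outer(dq_da, s)

    # Soft (Polyak) update of the target networks.
    W_actor_t  = TAU * W_actor  + (1 - TAU) * W_actor_t
    w_critic_t = TAU * w_critic + (1 - TAU) * w_critic_t
```

The soft target update is what stabilizes the TD target: the targets track the learned parameters with time constant 1/TAU rather than jumping each step.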
Pages: 247-257
Page count: 11
Related Papers
50 records total
  • [1] Deep Reinforcement Learning for Cooperative Edge Caching in Vehicular Networks
    Xing, Yuping
    Sun, Yanhua
    Qiao, Lan
    Wang, Zhuwei
    Si, Pengbo
    Zhang, Yanhua
    2021 13TH INTERNATIONAL CONFERENCE ON COMMUNICATION SOFTWARE AND NETWORKS (ICCSN 2021), 2021, : 144 - 149
  • [2] Deep Reinforcement Learning and Permissioned Blockchain for Content Caching in Vehicular Edge Computing and Networks
    Dai, Yueyue
    Xu, Du
    Zhang, Ke
    Maharjan, Sabita
    Zhang, Yan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (04) : 4312 - 4324
  • [3] Permissioned Blockchain and Deep Reinforcement Learning for Content Caching in Vehicular Edge Computing and Networks
    Dai, Yueyue
    Xu, Du
    Zhang, Ke
    Maharjan, Sabita
    Zhang, Yan
    2019 11TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2019,
  • [4] Vehicular edge cloud computing content caching optimization solution based on content prediction and deep reinforcement learning
    Zhu, Lin
    Li, Bingxian
    Tan, Long
    AD HOC NETWORKS, 2024, 165
  • [5] Deep Reinforcement Learning for Edge Caching with Mobility Prediction in Vehicular Networks
    Choi, Yoonjeong
    Lim, Yujin
    SENSORS, 2023, 23 (03)
  • [6] Deep Reinforcement Learning for Collaborative Edge Computing in Vehicular Networks
    Li, Mushu
    Gao, Jie
    Zhao, Lian
    Shen, Xuemin
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2020, 6 (04) : 1122 - 1135
  • [7] Mobility-Aware Cooperative Caching in Vehicular Edge Computing Based on Asynchronous Federated and Deep Reinforcement Learning
    Wu, Qiong
    Zhao, Yu
    Fan, Qiang
    Fan, Pingyi
    Wang, Jiangzhou
    Zhang, Cui
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2023, 17 (01) : 66 - 81
  • [8] Deep Reinforcement Learning for Cooperative Edge Caching in Future Mobile Networks
    Li, Ding
    Han, Yiwen
    Wang, Chenyang
    Shi, GaoTao
    Wang, Xiaofei
    Li, Xiuhua
    Leung, Victor C. M.
    2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2019,
  • [9] Efficient Vehicular Edge Computing: A Novel Approach With Asynchronous Federated and Deep Reinforcement Learning for Content Caching in VEC
    Yang, Wentao
    Liu, Zhibin
    IEEE ACCESS, 2024, 12 : 13196 - 13212
  • [10] Learning IoV in Edge: Deep Reinforcement Learning for Edge Computing Enabled Vehicular Networks
    Xu, Shilin
    Guo, Caili
    Hu, Rose Qingyang
    Qian, Yi
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,