Distributed Deep Multi-Agent Reinforcement Learning for Cooperative Edge Caching in Internet-of-Vehicles

Cited by: 80
Authors
Zhou, Huan [1 ,2 ]
Jiang, Kai [3 ]
He, Shibo [4 ]
Min, Geyong [5 ]
Wu, Jie [6 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710129, Peoples R China
[2] China Three Gorges Univ, Coll Comp & Informat Technol, Yichang 443002, Peoples R China
[3] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430000, Peoples R China
[4] Zhejiang Univ, Coll Control Sci & Technol, Hangzhou 310027, Peoples R China
[5] Univ Exeter, Coll Engn Math & Phys Sci, Dept Comp Sci, Exeter EX4 4QF, England
[6] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
Funding
National Natural Science Foundation of China;
Keywords
Computer architecture; Delays; Costs; Backhaul networks; Reinforcement learning; Quality of service; Optimization; Edge caching; Internet-of-Vehicles; content delivery; cache replacement; multi-agent reinforcement learning;
DOI
10.1109/TWC.2023.3272348
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
Edge caching is a promising approach to reducing duplicate content transmission in the Internet-of-Vehicles (IoV). Several Reinforcement Learning (RL) based edge caching methods have been proposed to improve resource utilization and reduce the backhaul traffic load. However, they obtain only locally sub-optimal solutions, because they neglect the influence that other agents exert on the environment. This paper investigates edge caching strategies that jointly consider content delivery and cache replacement by exploiting distributed Multi-Agent Reinforcement Learning (MARL). A hierarchical edge caching architecture for IoVs is proposed, and the corresponding problem is formulated with the goal of minimizing the long-term content access cost of the system. Then, we extend the Markov Decision Process (MDP) of single-agent RL to the multi-agent setting and tackle the resulting combinatorial multi-armed bandit problem within the framework of a stochastic game. Specifically, we first propose a Distributed MARL-based Edge caching method (DMRE), where each agent adaptively learns its best behavior in conjunction with other agents for intelligent caching. Meanwhile, we reduce the computational complexity of DMRE by parameter approximation, which legitimately simplifies the training targets. However, DMRE represents and updates its parameters through a lookup table, making it essentially a tabular method, which generally performs inefficiently in large-scale scenarios. To circumvent this issue and obtain more expressive parametric models, we incorporate the technical advantages of the Deep Q-Network into DMRE and further develop a computationally efficient method (DeepDMRE) with neural-network-based Nash equilibrium approximation. Extensive simulations are conducted to verify the effectiveness of the proposed methods. In particular, DeepDMRE outperforms DMRE, Q-learning, LFU, and LRU, improving the edge hit rate by roughly 5%, 19%, 40%, and 35%, respectively, when the cache capacity reaches 1,000 MB.
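The abstract frames cache replacement as an MDP solved with (tabular) Q-learning before moving to the multi-agent and deep variants. The sketch below is a minimal, single-agent illustration of that framing, not the paper's DMRE/DeepDMRE: one agent learns which cached item to evict on a miss under a Zipf-like request pattern. All names, parameters, and the reward shaping are illustrative assumptions.

```python
import random

# Illustrative single-agent tabular Q-learning for cache replacement
# (a toy version of the MDP described in the abstract, NOT the paper's
# DMRE/DeepDMRE). State: sorted tuple of cached content IDs; action:
# index of the cache slot to evict on a miss.

CATALOG = list(range(6))      # content IDs 0..5 (toy catalog)
CAPACITY = 2                  # cache holds 2 items
ALPHA, GAMMA, EPS = 0.3, 0.9, 0.1

def zipf_request(rng):
    # Popularity-skewed requests: low IDs are requested more often.
    weights = [1.0 / (i + 1) for i in CATALOG]
    return rng.choices(CATALOG, weights=weights, k=1)[0]

def run(episodes=3000, seed=0):
    rng = random.Random(seed)
    q = {}                                 # (state, action) -> value
    cache = CATALOG[:CAPACITY]
    hits = total = 0
    for _ in range(episodes):
        req = zipf_request(rng)
        total += 1
        if req in cache:
            hits += 1
            continue                       # hit: no replacement needed
        state = tuple(sorted(cache))
        # epsilon-greedy choice of which slot to evict
        if rng.random() < EPS:
            a = rng.randrange(CAPACITY)
        else:
            a = max(range(CAPACITY), key=lambda i: q.get((state, i), 0.0))
        cache[a] = req
        # Shaped reward: 1 if a fresh request would hit the updated cache.
        r = 1.0 if zipf_request(rng) in cache else 0.0
        ns = tuple(sorted(cache))
        best_next = max(q.get((ns, i), 0.0) for i in range(CAPACITY))
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + ALPHA * (r + GAMMA * best_next - old)
    return hits / total                    # empirical edge hit rate

hit_rate = run()
```

The paper's contribution is precisely that such independent single-agent learning ignores the other agents' effect on the shared environment; DMRE replaces the max over own actions with a Nash-equilibrium value of the stochastic game, and DeepDMRE replaces the lookup table `q` with a neural network.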
Pages: 9595-9609
Page count: 15