Distributed Deep Multi-Agent Reinforcement Learning for Cooperative Edge Caching in Internet-of-Vehicles

Cited by: 80
Authors
Zhou, Huan [1 ,2 ]
Jiang, Kai [3 ]
He, Shibo [4 ]
Min, Geyong [5 ]
Wu, Jie [6 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710129, Peoples R China
[2] China Three Gorges Univ, Coll Comp & Informat Technol, Yichang 443002, Peoples R China
[3] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430000, Peoples R China
[4] Zhejiang Univ, Coll Control Sci & Technol, Hangzhou 310027, Peoples R China
[5] Univ Exeter, Coll Engn Math & Phys Sci, Dept Comp Sci, Exeter EX4 4QF, England
[6] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 USA
Funding
National Natural Science Foundation of China;
Keywords
Computer architecture; Delays; Costs; Backhaul networks; Reinforcement learning; Quality of service; Optimization; Edge caching; Internet-of-Vehicles; content delivery; cache replacement; multi-agent reinforcement learning;
DOI
10.1109/TWC.2023.3272348
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Edge caching is a promising approach to reducing duplicate content transmission in the Internet-of-Vehicles (IoV). Several Reinforcement Learning (RL) based edge caching methods have been proposed to improve resource utilization and reduce the backhaul traffic load. However, they obtain only locally sub-optimal solutions, as they neglect the influence that other agents exert on the environment. This paper investigates edge caching strategies that jointly consider content delivery and cache replacement by exploiting distributed Multi-Agent Reinforcement Learning (MARL). A hierarchical edge caching architecture for IoVs is proposed, and the corresponding problem is formulated with the goal of minimizing the long-term content access cost in the system. We then extend the Markov Decision Process (MDP) of single-agent RL to a multi-agent setting and tackle the resulting combinatorial multi-armed bandit problem within the framework of a stochastic game. Specifically, we first propose a Distributed MARL-based Edge caching method (DMRE), where each agent adaptively learns its best behaviour in conjunction with other agents for intelligent caching. Meanwhile, we reduce the computational complexity of DMRE by parameter approximation, which legitimately simplifies the training targets. However, DMRE represents and updates its parameters through a lookup table; it is essentially a tabular method, which generally performs inefficiently in large-scale scenarios. To circumvent this issue and obtain more expressive parametric models, we incorporate the technical advantages of the Deep $Q$-Network into DMRE and develop a computationally efficient method (DeepDMRE) with neural-network-based Nash equilibrium approximation. Extensive simulations are conducted to verify the effectiveness of the proposed methods.
In particular, DeepDMRE outperforms DMRE, $Q$-learning, LFU, and LRU, improving the edge hit rate by roughly 5%, 19%, 40%, and 35%, respectively, when the cache capacity reaches 1,000 MB.
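For intuition about the tabular baselines the abstract compares against, the following is a minimal, illustrative sketch of a single-agent $Q$-learning cache-replacement policy. The class name, state encoding (the set of cached IDs), and hit/miss reward shaping are our own assumptions for illustration; this is not the paper's DMRE or DeepDMRE formulation, which is multi-agent and game-theoretic.

```python
import random
from collections import defaultdict

class QCacheAgent:
    """Illustrative tabular Q-learning cache (assumed design, not the paper's DMRE).

    State: frozenset of cached content IDs.
    Action on a miss: index of the slot to evict, or -1 when the cache
    still has free space. Reward: 1.0 for a cache hit, 0.0 for a miss.
    """

    def __init__(self, capacity, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.capacity = capacity
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)   # lookup table: (state, action) -> value
        self.cache = []               # cached content IDs, one per slot
        self.last = None              # previous (state, action), awaiting its reward

    def request(self, content_id):
        hit = content_id in self.cache
        reward = 1.0 if hit else 0.0
        state = frozenset(self.cache)
        if self.last is not None:
            # Q-learning update for the previous eviction decision,
            # using the reward just observed and a greedy bootstrap.
            s, a = self.last
            best_next = max((self.q[(state, b)] for b in range(len(self.cache))),
                            default=0.0)
            self.q[(s, a)] += self.alpha * (
                reward + self.gamma * best_next - self.q[(s, a)])
        if not hit:
            if len(self.cache) >= self.capacity:
                actions = list(range(len(self.cache)))   # choose a slot to evict
            else:
                actions = [-1]                           # free space: no eviction
            if random.random() < self.epsilon:
                a = random.choice(actions)               # explore
            else:
                a = max(actions, key=lambda b: self.q[(state, b)])  # exploit
            if a >= 0:
                self.cache.pop(a)
            self.cache.append(content_id)
            self.last = (state, a)
        return hit
```

The lookup table `self.q` is exactly the tabular representation the abstract criticizes: its size grows with the number of reachable cache states, which is why the paper replaces it with a Deep $Q$-Network approximator in DeepDMRE.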
Pages: 9595-9609
Number of pages: 15