Towards Intelligent Adaptive Edge Caching Using Deep Reinforcement Learning

Cited by: 7
Authors
Wang, Ting [1 ]
Deng, Yuxiang [1 ]
Mao, Jiawei [1 ]
Chen, Mingsong [1 ]
Liu, Gang [2 ]
Di, Jieming [3 ]
Li, Keqin [4 ]
Affiliations
[1] East China Normal Univ, MoE Engn Res Ctr Software Hardware Codesign Techno, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
[2] Nokia Shanghai Bell Corp, Bell Labs, Shanghai 201206, Peoples R China
[3] Meta, Seattle, WA 98109 USA
[4] SUNY, Dept Comp Sci, New York, NY 10018 USA
Keywords
Quality of experience; Deep learning; Cloud computing; Reinforcement learning; Servers; Mobile computing; Costs; Edge caching; deep reinforcement learning; quality of experience; mobile; framework; policy
DOI
10.1109/TMC.2024.3361083
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The tremendous expansion of edge data traffic poses great challenges to network bandwidth and service responsiveness for mobile computing. Edge caching has emerged as a promising method to alleviate these issues by storing a portion of data at the network edge. However, existing caching approaches suffer either from poor caching efficiency with a low content hit ratio, or from inflexible caching policies that lack self-adjustability. In this article, we propose ICE, a novel Intelligent Edge Caching scheme that uses deep reinforcement learning (DRL) to capture valuable information from the requested data. Benefiting from our proposed popularity model based on Newton's law of cooling, ICE fully accounts for the popularity of the contents to be cached and leverages the formulated Markov decision model to decide whether contents should be cached. Moreover, to further improve caching efficiency, we propose a novel distributed multi-node caching framework, named DCCC, assisted by a multi-tiered caching hierarchy. Comprehensive experiments show that the single-node ICE scheme greatly improves the cache hit rate and content exchange time compared with both DRL-based and legacy approaches, and our distributed multi-node caching scheme DCCC further significantly improves the overall utilization of caching space.
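The abstract does not reproduce the paper's popularity formula, but a popularity model "based on Newton's law of cooling" is commonly realized as an exponentially decaying request score: with no new requests, a content's popularity cools toward zero at a fixed rate, and each request adds heat. The sketch below is purely illustrative; the class name `CoolingPopularity`, the `cooling_rate` parameter, and the unit boost per request are assumptions, not the authors' actual model.

```python
import math


class CoolingPopularity:
    """Illustrative popularity tracker in the spirit of Newton's law of
    cooling: a content's score decays exponentially between requests
    (it 'cools' toward zero at rate k) and is boosted on each request."""

    def __init__(self, cooling_rate: float = 0.1):
        self.k = cooling_rate
        self.score = {}      # content id -> score at last update
        self.last_seen = {}  # content id -> time of last update

    def current(self, content_id, now: float) -> float:
        """Score decayed from the last update to time `now`."""
        if content_id not in self.score:
            return 0.0
        dt = now - self.last_seen[content_id]
        return self.score[content_id] * math.exp(-self.k * dt)

    def record_request(self, content_id, now: float, boost: float = 1.0):
        """Decay the stored score to `now`, then add the request's boost."""
        self.score[content_id] = self.current(content_id, now) + boost
        self.last_seen[content_id] = now
```

A cache admission policy could then compare `current(content_id, now)` against a threshold (or feed it into the DRL agent's state) to decide whether a requested content is popular enough to cache.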
Pages: 9289-9303
Page count: 15
References
41 in total
[1] Abouaomar, A. 2017 International Conference on Wireless Networks and Mobile Communications (WINCOM), 2017, p. 181.
[2] Chen, Shuangwu; Yao, Zhen; Jiang, Xiaofeng; Yang, Jian; Hanzo, Lajos. Multi-Agent Deep Reinforcement Learning-Based Cooperative Edge Caching for Ultra-Dense Next-Generation Networks. IEEE Transactions on Communications, 2021, 69(4): 2441-2456.
[3] Fan, Qilin; Li, Xiuhua; Li, Jian; He, Qiang; Wang, Kai; Wen, Junhao. PA-Cache: Evolving Learning-Based Popularity-Aware Content Caching in Edge Networks. IEEE Transactions on Network and Service Management, 2021, 18(2): 1746-1757.
[4] Geetha, Krishnan; Gounden, N. Ammasai. Dynamic Semantic LFU Policy with Victim Tracer (DSLV): A Customizing Technique for Client Cache. Arabian Journal for Science and Engineering, 2017, 42(2): 725-737.
[5] Google. TensorFlow 1.9.0, 2018.
[6] Gursoy, M. C. Machine Learning for Future Wireless Communications, 2020, p. 439.
[7] Im, Y. 2018 52nd Annual Conference on Information Sciences and Systems (CISS), 2018.
[8] Jiang, B. ACM SIGMETRICS Performance Evaluation Review, 2017, 45: 24.
[9] Kumar, Naveen; Swain, Siba Narayan; Murthy, C. Siva Ram. A Novel Distributed Q-Learning Based Resource Reservation Framework for Facilitating D2D Content Access Requests in LTE-A Networks. IEEE Transactions on Network and Service Management, 2018, 15(2): 718-731.
[10] Le Thanh Tan; Hu, Rose Qingyang. Mobility-Aware Edge Caching and Computing in Vehicle Networks: A Deep Reinforcement Learning. IEEE Transactions on Vehicular Technology, 2018, 67(11): 10190-10203.