Communication-Efficient Federated Deep Reinforcement Learning Based Cooperative Edge Caching in Fog Radio Access Networks

Citations: 0
Authors
Zhang, Min [1 ]
Jiang, Yanxiang [1 ]
Zheng, Fu-Chun [1 ,2 ]
Wang, Dongming [1 ]
Bennis, Mehdi [3 ]
Jamalipour, Abbas [4 ]
You, Xiaohu [1 ]
Affiliations
[1] Southeast Univ, Natl Mobile Commun Res Lab, Nanjing 210096, Peoples R China
[2] Harbin Inst Technol, Sch Elect & Informat Engn, Shenzhen 518055, Peoples R China
[3] Univ Oulu, Ctr Wireless Commun, Oulu 90014, Finland
[4] Univ Sydney, Sch Elect & Comp Engn, Sydney, NSW 2006, Australia
Funding
National Natural Science Foundation of China;
Keywords
Training; Servers; Delays; Computational modeling; Data models; Cooperative caching; Convergence; Solid modeling; Radio access networks; Load modeling; Fog radio access networks; cooperative edge caching; dueling DQN; deep reinforcement learning; federated learning; quantization; COMPREHENSIVE SURVEY; OPTIMIZATION; ALGORITHM;
DOI
10.1109/TWC.2024.3467285
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
In this paper, the cooperative edge caching problem is studied in fog radio access networks (F-RANs). Given the non-deterministic polynomial hard (NP-hard) nature of the problem, a dueling deep Q network (Dueling DQN) based caching update algorithm is proposed to make optimal caching decisions by learning the dynamic network environment. To protect user data privacy and address the slow convergence of training a single deep reinforcement learning (DRL) model, we propose a communication-efficient federated deep reinforcement learning (CE-FDRL) method to cooperatively train models across multiple fog access points (F-APs) in F-RANs. To reduce the excessive consumption of communication resources caused by model transmission, we prune and quantize the shared DRL models so that fewer model parameters need to be transferred. Periodic model aggregation further lengthens the communication interval and reduces the number of communication rounds. The global convergence and computational complexity of the proposed method are also analyzed. Simulation results verify that the proposed method reduces user request delay and improves the cache hit rate, while the number of transmitted parameters can drop to 60% of that of existing benchmark schemes. The proposed method is also shown to achieve faster training and higher communication efficiency.
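The compression idea summarized in the abstract — prune the smallest-magnitude weights of the shared DRL model, quantize the survivors, and only then aggregate across F-APs — can be illustrated with a minimal sketch. The function names, pruning ratio, and bit width below are illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import numpy as np

def prune_and_quantize(weights, prune_ratio=0.4, num_bits=8):
    """Illustrative compression of one model-parameter tensor:
    zero out the smallest-magnitude fraction of weights, then
    uniformly quantize the survivors to num_bits levels."""
    flat = weights.flatten()
    k = int(len(flat) * prune_ratio)
    if k > 0:
        # Magnitude threshold below which weights are pruned.
        threshold = np.sort(np.abs(flat))[k - 1]
        flat = np.where(np.abs(flat) > threshold, flat, 0.0)
    scale = np.abs(flat).max()
    if scale == 0:
        return flat.reshape(weights.shape)
    # Symmetric uniform quantization to the surviving range.
    levels = 2 ** (num_bits - 1) - 1
    quantized = np.round(flat / scale * levels) / levels * scale
    return quantized.reshape(weights.shape)

def federated_average(local_models):
    """Periodic aggregation step: element-wise mean of the
    compressed local models uploaded by the F-APs."""
    return np.mean(np.stack(local_models), axis=0)
```

In this sketch, pruning shrinks the payload (zeros compress well or can be sent as sparse indices) and quantization shrinks each remaining parameter, which is consistent with the abstract's claim of reducing the volume of transmitted parameters; periodic averaging then plays the role of the model aggregation step.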
Pages: 18409-18422
Page count: 14