FedCache: A Knowledge Cache-Driven Federated Learning Architecture for Personalized Edge Intelligence

Cited by: 10
Authors
Wu, Zhiyuan [1 ,2 ]
Sun, Sheng [1 ]
Wang, Yuwei [1 ]
Liu, Min [1 ,3 ]
Xu, Ke [3 ,4 ]
Wang, Wen [1 ,2 ]
Jiang, Xuefeng [1 ,2 ]
Gao, Bo [5 ]
Lu, Jinda [6 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 101408, Peoples R China
[3] Zhongguancun Lab, Beijing 100086, Peoples R China
[4] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100190, Peoples R China
[5] Beijing Jiaotong Univ, Engn Res Ctr Network Management Technol High Speed, Sch Comp & Informat Technol, Minist Educ, Beijing 100082, Peoples R China
[6] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 101127, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Computer architecture; Training; Servers; Computational modeling; Data models; Adaptation models; Performance evaluation; Distributed architecture; edge computing; personalized federated learning; knowledge distillation; communication efficiency;
DOI
10.1109/TMC.2024.3361876
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Edge Intelligence (EI) allows Artificial Intelligence (AI) applications to run at the edge, where data analysis and decision-making can be performed in real time and close to data sources. To protect data privacy and unify the data silos distributed among end devices in EI, Federated Learning (FL) has been proposed for collaboratively training shared AI models across multiple devices without compromising data privacy. However, prevailing FL approaches cannot guarantee model generalization and adaptation on heterogeneous clients. Recently, Personalized Federated Learning (PFL) has attracted growing attention in EI, as it enables a productive balance between the local-specific training requirements inherent in devices and the global-generalized optimization objectives needed for satisfactory performance. However, most existing PFL methods are based on the Parameters Interaction-based Architecture (PIA), represented by FedAvg, which suffers from unaffordable communication burdens due to large-scale parameter transmission between devices and the edge server. In contrast, the Logits Interaction-based Architecture (LIA) updates model parameters through logits transfer and, compared to PIA, offers lightweight communication and support for heterogeneous on-device models. Nevertheless, previous LIA methods attempt to achieve satisfactory performance either by relying on unrealistic public datasets or by increasing communication overhead to transmit additional information beyond logits. To tackle this dilemma, we propose a knowledge cache-driven PFL architecture, named FedCache, which maintains a knowledge cache on the server for fetching personalized knowledge from the samples whose hashes are similar to that of each given on-device sample. During the training phase, ensemble distillation is applied to on-device models for constructive optimization with personalized knowledge transferred from the server-side knowledge cache. Empirical experiments on four datasets demonstrate that FedCache achieves performance comparable to state-of-the-art PFL approaches, with more than two orders of magnitude improvement in communication efficiency.
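The following is a minimal sketch of the knowledge-cache mechanism as described in the abstract: the server stores (hash, logits) pairs uploaded by devices and, for each query hash, returns the ensemble-averaged logits of the stored samples with the most similar hashes; the device then distills its local model toward that fetched knowledge. All names here (KnowledgeCache, update, fetch, lsh_hash, distillation_loss) are illustrative, not the paper's API, and the random-hyperplane hashing and simple averaging are stand-in choices, since the abstract does not specify the exact hashing and indexing scheme.

```python
import numpy as np
import torch
import torch.nn.functional as F

class KnowledgeCache:
    """Server-side cache mapping sample hash codes to their latest logits."""

    def __init__(self, num_related: int = 16):
        self.num_related = num_related  # R: how many similar samples to ensemble
        self.hashes = []   # binary hash codes, one np.ndarray of 0/1 per sample
        self.logits = []   # logits vectors, aligned with self.hashes

    def update(self, hash_code: np.ndarray, logits: np.ndarray) -> None:
        """Store the knowledge (logits) associated with one on-device sample."""
        self.hashes.append(hash_code)
        self.logits.append(logits)

    def fetch(self, hash_code: np.ndarray) -> np.ndarray:
        """Average the logits of the R samples whose hashes are closest (Hamming)."""
        codes = np.stack(self.hashes)                         # (N, bits)
        dists = np.count_nonzero(codes != hash_code, axis=1)  # Hamming distances
        nearest = np.argsort(dists)[: self.num_related]
        return np.mean(np.stack([self.logits[i] for i in nearest]), axis=0)

def lsh_hash(feature: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """Binary hash via random hyperplane projections (one simple LSH choice)."""
    return (feature @ planes.T > 0).astype(np.uint8)

def distillation_loss(local_logits: torch.Tensor,
                      fetched_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between local predictions and fetched ensemble knowledge."""
    p_teacher = F.softmax(fetched_logits / temperature, dim=-1)
    log_p_student = F.log_softmax(local_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    planes = rng.standard_normal((32, 128))  # 32-bit hash over 128-d features
    cache = KnowledgeCache(num_related=4)

    # Devices upload (hash, logits) pairs instead of model parameters.
    for _ in range(100):
        feat = rng.standard_normal(128)
        cache.update(lsh_hash(feat, planes), rng.standard_normal(10))

    # A device fetches personalized knowledge for one of its own samples...
    query = rng.standard_normal(128)
    knowledge = torch.tensor(cache.fetch(lsh_hash(query, planes)),
                             dtype=torch.float32)

    # ...and distills its local model's output toward that knowledge.
    local_out = torch.randn(10, requires_grad=True)
    loss = distillation_loss(local_out.unsqueeze(0), knowledge.unsqueeze(0))
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

Note how, under this sketch, a device only ever exchanges compact hash codes and logits vectors with the server rather than full model parameters, which is the source of the communication savings the abstract claims and also why heterogeneous on-device model architectures can be accommodated.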
Pages: 9368-9382
Page count: 15