Digital Twin-Assisted Knowledge Distillation Framework for Heterogeneous Federated Learning

Cited by: 12
Authors
Wang, Xiucheng [1 ]
Cheng, Nan [1 ]
Ma, Longfei [1 ]
Sun, Ruijin [1 ]
Chai, Rong [2 ]
Lu, Ning [3 ]
Affiliations
[1] Xidian Univ, Sch Telecommun Engn, Xian 710071, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Mobile Commun Technol, Chongqing 400065, Peoples R China
[3] Queens Univ, Dept Elect & Comp Engn, Kingston, ON K7L 3N6, Canada
Funding
National Natural Science Foundation of China;
Keywords
federated learning; digital twin; knowledge distillation; heterogeneity; Q-learning; convex optimization; SECURITY;
DOI
10.23919/JCC.2023.02.005
CLC Classification
TN [Electronic Technology, Communication Technology];
Discipline Code
0809;
Abstract
In this paper, to deal with the heterogeneity in federated learning (FL) systems, a knowledge distillation (KD) driven training framework for FL is proposed, where each user can select its neural network model on demand and distill knowledge from a big teacher model using its own private dataset. To overcome the challenge of training the big teacher model on resource-limited user devices, the digital twin (DT) is exploited so that the teacher model can be trained at the DT located in the server with sufficient computing resources. Then, during model distillation, each user can update the parameters of its model at either the physical entity or the digital agent. The joint problem of model selection, training offloading, and resource allocation for users is formulated as a mixed integer programming (MIP) problem. To solve this problem, Q-learning and optimization are used jointly: Q-learning selects models for users and determines whether each user trains locally or on the server, and optimization allocates resources for users based on the output of Q-learning. Simulation results show that the proposed DT-assisted KD framework and joint optimization method can significantly improve the average accuracy of users while reducing the total delay.
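The abstract does not give the distillation loss itself; as a rough illustration only, a standard Hinton-style distillation update that a user's lightweight student model might perform against the DT-trained teacher could look like the following PyTorch sketch (the function name distillation_step, the temperature, and the weighting factor alpha are illustrative assumptions, not notation from the paper).

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x, y, optimizer, temperature=2.0, alpha=0.5):
    """Illustrative sketch of one KD update on a user's private batch (x, y).

    The student stands in for the user's on-demand model; the teacher for the
    big model assumed to have been trained at the digital twin on the server.
    Not the authors' exact formulation.
    """
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(x)  # soft targets from the big teacher model

    student_logits = student(x)

    # Soft-label loss: KL divergence between temperature-scaled distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-label loss on the user's own private labels.
    hard_loss = F.cross_entropy(student_logits, y)

    loss = alpha * soft_loss + (1.0 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the framework described by the abstract, such an update could run either on the user device (the physical entity) or at the user's digital agent on the server, with that choice, together with the model selection, made by the Q-learning policy and the resulting resource allocation handled by the optimization stage.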
Pages: 61-78
Number of pages: 18