Ensemble Distillation Based Adaptive Quantization for Supporting Federated Learning in Wireless Networks

Cited by: 11
Authors
Liu, Yi-Jing [1 ,2 ]
Feng, Gang [1 ,2 ]
Niyato, Dusit [3 ]
Qin, Shuang [1 ,2 ]
Zhou, Jianhong [4 ]
Li, Xiaoqian [1 ,2 ]
Xu, Xinyi [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Natl Key Lab Commun, Chengdu 611731, Peoples R China
[2] Univ Elect Sci & Technol China, Yangtze Delta Reg Inst Huzhou, Huzhou 313001, Peoples R China
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[4] Xihua Univ, Sch Comp & Software Engn, Chengdu 610039, Peoples R China
Funding
US National Science Foundation;
Keywords
Federated learning; wireless network; adaptive quantization; ensemble distillation; heterogeneous models; aggregation
DOI
10.1109/TWC.2022.3222717
CLC Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Federated learning (FL) has become a promising technique for developing intelligent wireless networks. In traditional FL paradigms, local models are usually required to be homogeneous for aggregation. However, because of wireless system heterogeneity, heterogeneous models are preferable, allowing user equipments (UEs) to undertake an appropriate amount of computing and/or data transmission work based on system constraints. Meanwhile, model training incurs considerable communication costs when a large number of UEs participate in FL and/or the transmitted models are large. Therefore, resource-efficient training schemes for heterogeneous models are essential for enabling FL-based intelligent wireless networks. In this paper, we propose an adaptive quantization scheme based on ensemble distillation (AQeD) to facilitate heterogeneous model training. We first partition the participating UEs into clusters, where the local models within each cluster are homogeneous but with different quantization levels. We then propose an augmented loss function that jointly considers the ensemble distillation loss, quantization levels, and wireless resource constraints. In AQeD, model aggregation is performed at two levels: model aggregation within individual clusters and distillation loss aggregation across the cluster ensemble. Numerical results show that AQeD significantly reduces communication costs and training time in comparison with state-of-the-art solutions.
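The abstract sketches AQeD only at a high level. As a rough illustration (not the paper's actual formulation), the Python sketch below mimics the described shape: uniform quantization of local models, FedAvg-style aggregation within a homogeneous cluster, an ensemble of per-cluster logits on a shared batch, and an augmented loss that adds a distillation penalty to the task loss. All function names, the uniform quantizer, the MSE distillation term, and the weight lam are assumptions for illustration; how quantization levels and wireless resource constraints actually enter the paper's loss is not reproduced here.

import numpy as np

def quantize(w, levels):
    # Uniform quantization of a weight vector onto `levels` evenly spaced values.
    lo, hi = float(w.min()), float(w.max())
    if levels < 2 or hi == lo:
        return w.copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

def cluster_aggregate(local_weights, sample_counts):
    # FedAvg within one cluster: the models are homogeneous, so a weighted
    # average of their (quantized) parameter vectors is well defined.
    total = float(sum(sample_counts))
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

def ensemble_logits(per_cluster_logits):
    # Cross-cluster ensemble: average the clusters' logits on a shared batch.
    return np.mean(np.stack(per_cluster_logits), axis=0)

def augmented_loss(task_loss, student_logits, teacher_logits, lam=0.5):
    # Task loss plus a distillation penalty pulling one cluster's predictions
    # toward the ensemble's (MSE on logits is just one possible choice).
    distill = float(np.mean((student_logits - teacher_logits) ** 2))
    return task_loss + lam * distill

# Toy usage: one cluster with two UEs holding 8-parameter "models".
rng = np.random.default_rng(0)
cluster = [quantize(rng.normal(size=8), levels=4) for _ in range(2)]
cluster_model = cluster_aggregate(cluster, sample_counts=[100, 50])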
Pages: 4013-4027
Number of pages: 15