Communication-Efficient and Byzantine-Robust Federated Learning for Mobile Edge Computing Networks

Cited by: 9
Authors
Zhang, Zhuangzhuang [1,3]
Wu, Libing [1,3]
He, Debiao [1]
Li, Jianxin [4,5]
Cao, Shuqin [2]
Wu, Xianfeng [6]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan, Peoples R China
[2] Wuhan Univ, Wuhan, Peoples R China
[3] Guangdong Lab Artificial Intelligence & Digital Econ, Guangzhou, Peoples R China
[4] Deakin Univ, Sch Informat Technol, Data Sci, Geelong, Vic, Australia
[5] Deakin Univ, Sch Informat Technol, Smart Networks Lab, Geelong, Vic, Australia
[6] Jianghan Univ, Sch Artificial Intelligence, Wuhan, Peoples R China
Source
IEEE NETWORK | 2023, Vol. 37, No. 4
Funding
National Natural Science Foundation of China;
Keywords
Training; Threat modeling; Multi-access edge computing; Federated learning; Computational modeling; Data models; Servers; Machine learning;
DOI
10.1109/MNET.006.2200651
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated learning in mobile edge computing allows a large number of devices to jointly train an accurate machine learning model without collecting local data from edge nodes. However, applying federated learning in mobile edge computing faces two major challenges. First, mobile edge computing networks tolerate only limited communication overhead; that is, the communication overhead among edge nodes, edge servers, and cloud servers cannot be excessive. Unfortunately, federated learning clients send a large volume of local model updates, which does not meet this requirement. Second, resource-constrained edge nodes are vulnerable to attack; that is, they are highly susceptible to compromise by a potential adversary and can be used to launch Byzantine attacks. To address these challenges, we propose CBFL, a communication-efficient and Byzantine-robust federated learning scheme for mobile edge computing networks. Specifically, each edge node compresses its local model update by taking its element-wise sign and sends the compressed update to an edge server for local model aggregation. The cloud server then uses a small evaluation dataset to evaluate the edge servers' local aggregation results and uses the evaluation results to perform global model aggregation. Extensive experiments evaluating the performance of the proposed CBFL show that it withstands Byzantine attacks while maintaining communication efficiency.
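The abstract describes a three-step pipeline: element-wise sign compression at the edge nodes, local aggregation at the edge servers, and evaluation-weighted global aggregation at the cloud server. Below is a minimal Python sketch of that pipeline on a toy linear model; the helper names (compress_update, edge_aggregate, cloud_aggregate, eval_loss) are hypothetical, and since this record does not give the paper's exact aggregation and scoring rules, plausible substitutes (element-wise majority vote at the edge, exponential loss-based weighting at the cloud) are assumed.

import numpy as np

def eval_loss(weights, x, y):
    # Squared loss of a toy linear model; stands in for the paper's real evaluation task.
    return float(np.mean((x @ weights - y) ** 2))

def compress_update(update):
    # Edge node: element-wise sign compression, one bit of information per coordinate.
    return np.sign(update).astype(np.int8)

def edge_aggregate(sign_updates):
    # Edge server: combine the compressed updates from its edge nodes;
    # an element-wise majority vote is assumed here.
    return np.sign(np.sum(sign_updates, axis=0))

def cloud_aggregate(model, edge_results, eval_x, eval_y, lr=0.1):
    # Cloud server: score each edge server's aggregate on a small evaluation
    # dataset, then weight better-scoring aggregates more heavily (assumed scheme).
    scores = np.array([np.exp(-eval_loss(model - lr * agg, eval_x, eval_y))
                       for agg in edge_results])
    weights = scores / scores.sum()
    global_update = sum(w * agg for w, agg in zip(weights, edge_results))
    return model - lr * global_update

# Toy usage: two edge servers, each aggregating three edge nodes' updates.
rng = np.random.default_rng(0)
model = rng.normal(size=10)
eval_x, eval_y = rng.normal(size=(32, 10)), rng.normal(size=32)
edge_results = [edge_aggregate([compress_update(rng.normal(size=10))
                                for _ in range(3)])
                for _ in range(2)]
model = cloud_aggregate(model, edge_results, eval_x, eval_y)

Under these assumptions, a Byzantine edge node can flip only the signs it reports, so the majority vote bounds its per-coordinate influence, while the cloud's evaluation-based weighting down-weights edge servers whose aggregates degrade the model.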
Pages: 112-119
Number of pages: 8
Related Papers
15 records in total
[1]   Task Offloading for Mobile Edge Computing in Software Defined Ultra-Dense Network [J].
Chen, Min ;
Hao, Yixue .
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2018, 36 (03) :587-597
[2]   CRACAU: Byzantine Machine Learning Meets Industrial Edge Computing in Industry 5.0 [J].
Du, Anran ;
Shen, Yicheng ;
Zhang, Qinzi ;
Tseng, Lewis ;
Aloqaily, Moayad .
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (08) :5435-5445
[3]   Byzantine Resistant Secure Blockchained Federated Learning at the Edge [J].
Li, Zonghang ;
Yu, Hongfang ;
Zhou, Tianyao ;
Luo, Long ;
Fan, Mochan ;
Xu, Zenglin ;
Sun, Gang .
IEEE NETWORK, 2021, 35 (04) :295-301
[4]   Keep Your Data Locally: Federated-Learning-Based Data Privacy Preservation in Edge Computing [J].
Liu, Gaoyang ;
Wang, Chen ;
Ma, Xiaoqiang ;
Yang, Yang .
IEEE NETWORK, 2021, 35 (02) :60-66
[5]  
Liu J., 2021, IEEE Transactions on Mobile Computing
[6]  
McMahan HB, Communication-Efficient Learning of Deep Networks from Decentralized Data, 2017, PR MACH LEARN RES, V54, P1273
[7]   Fast Federated Learning by Balancing Communication Trade-Offs [J].
Nori, Milad Khademi ;
Yun, Sangseok ;
Kim, Il-Min .
IEEE TRANSACTIONS ON COMMUNICATIONS, 2021, 69 (08) :5168-5182
[8]  
Reisizadeh A, FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization, 2020, PR MACH LEARN RES, V108, P2021
[9]  
Wang L., 2022, IEEE Transactions on Mobile Computing
[10]   Toward Resource-Efficient Federated Learning in Mobile Edge Computing [J].
Yu, Rong ;
Li, Peichun .
IEEE NETWORK, 2021, 35 (01) :148-155