Two-way Delayed Updates with Model Similarity in Communication-Efficient Federated Learning

Times Cited: 0
Authors
Mao, Yingchi [1 ,2 ]
Wang, Zibo [2 ]
Wu, Jun [2 ]
Shen, Lijuan [2 ]
Xu, Shufang [1 ,2 ]
Wu, Jie [3 ]
Affiliations
[1] Hohai Univ, Key Lab Water Big Data Technol, Minist Water Resources, Nanjing, Peoples R China
[2] Hohai Univ, Sch Comp & Informat, Nanjing, Peoples R China
[3] Temple Univ, Ctr Networked Comp, Philadelphia, PA USA
Source
2023 19TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN 2023 | 2023
Keywords
federated learning; data heterogeneity; communication efficiency optimization; communication frequency
DOI
10.1109/MSN60784.2023.00080
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The success of IoT and the widespread use of edge devices have brought explosive growth in data. The quality and scale of data determine the performance of machine learning models. Federated learning has attracted widespread attention for its ability to exploit isolated data while protecting data privacy. Through federated training, models can achieve excellent generalization. However, the large number of devices and the complex models involved in federated training inflate communication costs and degrade the performance of the global model. Although existing approaches can reduce communication costs, they ignore the degradation of global model accuracy in heterogeneous environments. To alleviate the huge communication costs of federated learning, this paper focuses on reducing both upstream and downstream communication frequency while preserving global model accuracy. We propose a Two-way Delayed Updates method with Model Similarity in Communication-Efficient Federated Learning (FedTDMS). FedTDMS employs personalized local computation to improve global model accuracy on heterogeneous data. By combining a local update relevance check with global model compensation, FedTDMS reduces the communication frequency of federated learning. We conduct experiments on the MNIST-FL and CIFAR-10-FL datasets. Results show that FedTDMS greatly improves communication efficiency while maintaining good global model accuracy.
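Since the abstract only outlines the mechanism, the sketch below illustrates one plausible form of the local update relevance check: a client transmits its update only when it differs sufficiently, by cosine similarity, from the update the server last cached for it, and the server compensates skipped rounds by reusing that cached update. The function names, the choice of cosine similarity, and the threshold value are assumptions made for illustration, not FedTDMS's confirmed design.

```python
# Minimal, illustrative sketch of a similarity-gated upload decision.
# All names and the threshold are hypothetical; the abstract does not
# specify FedTDMS's actual similarity measure or protocol details.
from typing import Optional

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened model-update vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0


def should_upload(local_update: np.ndarray,
                  cached_update: Optional[np.ndarray],
                  threshold: float = 0.9) -> bool:
    """Return True if the client should transmit its update this round.

    If the new local update is highly similar to the update the server
    already holds for this client, the upload is skipped (a delayed
    update); the server compensates by reusing the cached update in
    aggregation.
    """
    if cached_update is None:  # first round: nothing cached, must upload
        return True
    return cosine_similarity(local_update, cached_update) < threshold


# Toy usage: a client whose update barely changed skips this round.
rng = np.random.default_rng(0)
cached = np.array([1.0, 0.5, -0.2])
fresh = cached + 1e-3 * rng.standard_normal(3)  # nearly identical update
print(should_upload(fresh, cached))             # False -> skip the upload
```

An analogous check on the downstream side, where a client keeps its cached copy when the new global model is sufficiently similar, would be one way to realize the "two-way" delayed updates the abstract describes for reducing download frequency.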
Pages: 520-527
Number of pages: 8
Related Papers
50 records in total
  • [41] SPinS-FL: Communication-Efficient Federated Subnetwork Learning
    Tsutsui, Masayoshi
    Takamaeda-Yamazaki, Shinya
    2023 IEEE 20TH CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2023
  • [42] Accelerating Communication-Efficient Federated Multi-Task Learning With Personalization and Fairness
    Xie, Renyou
    Li, Chaojie
    Zhou, Xiaojun
    Dong, Zhaoyang
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35 (11) : 2239 - 2253
  • [43] Coded Federated Learning for Communication-Efficient Edge Computing: A Survey
    Zhang, Yiqian
    Gao, Tianli
    Li, Congduan
    Tan, Chee Wei
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2024, 5 : 4098 - 4124
  • [44] Byzantine-Robust and Communication-Efficient Personalized Federated Learning
    Zhang, Jiaojiao
    He, Xuechao
    Huang, Yue
    Ling, Qing
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2025, 73 : 26 - 39
  • [45] FedSC: Compatible Gradient Compression for Communication-Efficient Federated Learning
    Yu, Xinlei
    Gao, Zhipeng
    Zhao, Chen
    Mo, Zijia
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT I, 2024, 14487 : 360 - 379
  • [46] FedDQ: A communication-efficient federated learning approach for Internet of Vehicles
    Mo, Zijia
    Gao, Zhipeng
    Zhao, Chen
    Lin, Yijing
    JOURNAL OF SYSTEMS ARCHITECTURE, 2022, 131
  • [47] Communication-Efficient Federated Learning for Connected Vehicles with Constrained Resources
    Shen, Shuaiqi
    Yu, Chong
    Zhang, Kuan
    Chen, Xi
    Chen, Huimin
    Ci, Song
    IWCMC 2021: 2021 17TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE (IWCMC), 2021 : 1636 - 1641
  • [48] LGCM: A Communication-Efficient Scheme for Federated Learning in Edge Devices
    Saadat, Nafas Gul
    Thahir, Sameer Mohamed
    Kumar, Santhosh G.
    Jereesh, A. S.
    2022 IEEE 19TH INDIA COUNCIL INTERNATIONAL CONFERENCE, INDICON, 2022
  • [49] SparseBatch: Communication-efficient Federated Learning With Partially Homomorphic Encryption
    Wang, Chong
    Wang, Jing
    Lou, Zheng
    Kong, Linghai
    Tao, Weisong
    Wang, Yun
    JOURNAL OF APPLIED SCIENCE AND ENGINEERING, 2025, 28 (08) : 1645 - 1656
  • [50] FedSL: A Communication-Efficient Federated Learning With Split Layer Aggregation
    Zhang, Weishan
    Zhou, Tao
    Lu, Qinghua
    Yuan, Yong
    Tolba, Amr
    Said, Wael
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09) : 15587 - 15601