Adaptive Federated Learning With Negative Inner Product Aggregation

Times Cited: 17
Authors
Deng, Wu [1 ]
Chen, Xintao [1 ]
Li, Xinyan [1 ]
Zhao, Huimin [1 ]
Affiliations
[1] Civil Aviat Univ China, Coll Elect Informat & Automat, Tianjin 300300, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024 / Vol. 11 / No. 4
Funding
National Natural Science Foundation of China;
Keywords
Training; Convergence; Federated learning; Adaptation models; Performance evaluation; Data models; Servers; Adaptive federated learning; communication optimization; negative inner product aggregation; workload prediction;
DOI
10.1109/JIOT.2023.3312059
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) is a distributed machine learning approach in which a centralized server trains a model while the data remain isolated on edge devices. FL thus preserves data privacy and can improve model accuracy. However, unexpected device exits during training can severely degrade model performance. To address the communication overhead issue and accelerate model convergence, a novel adaptive FL approach with negative inner product aggregation, termed NIPAFed, is proposed in this article. NIPAFed leverages a congestion-control algorithm inspired by TCP, the additive-increase/multiplicative-decrease (AIMD) strategy, to adaptively predict the workload of devices from their historical workload, thereby effectively mitigating the impact of stragglers on the training process. Additionally, to reduce communication overhead and latency, a negative inner product aggregation strategy is employed to accelerate model convergence and minimize the number of communication rounds required. The convergence of the model is also analyzed theoretically. NIPAFed is validated on public federated data sets and compared with several existing algorithms. The experimental results clearly demonstrate the superior performance of NIPAFed: by reducing device dropouts and minimizing communication rounds, it effectively controls communication overhead while convergence is ensured.
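The workload-prediction idea described in the abstract follows the classic additive-increase/multiplicative-decrease (AIMD) rule from TCP congestion avoidance (Chiu and Jain, reference [7] below). A minimal sketch, assuming the server raises a device's predicted workload by a constant when it completes its assignment and cuts it multiplicatively when it straggles or drops out; all function names and default parameters are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch (not the paper's implementation): AIMD-style
# workload prediction for federated devices, following Chiu and Jain's
# additive-increase/multiplicative-decrease rule.

def predict_workload(prev_workload: float, completed: bool,
                     alpha: float = 1.0, beta: float = 0.5,
                     floor: float = 1.0) -> float:
    """Predict a device's next-round workload from its last assignment.

    Additive increase: if the device completed its assigned workload,
    cautiously raise the prediction by the constant `alpha`.
    Multiplicative decrease: if it straggled or dropped out, cut the
    prediction by the factor `beta` to back off quickly, never below
    `floor` so the device is not excluded entirely.
    """
    if completed:
        return prev_workload + alpha
    return max(floor, prev_workload * beta)


# Example trajectory: three completed rounds, then one straggle.
w = 4.0
for done in (True, True, True, False):
    w = predict_workload(w, done)
# w: 4.0 -> 5.0 -> 6.0 -> 7.0 -> 3.5
```

The asymmetry is the point: predictions grow slowly while a device keeps up, but collapse quickly after a straggle, which is what lets the server sidestep slow devices before they stall a training round.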
Pages: 6570-6581
Number of Pages: 12
Related Papers
37 records in total
[1]   FedMCCS: Multicriteria Client Selection Model for Optimal IoT Federated Learning [J].
AbdulRahman, Sawsan ;
Tout, Hanine ;
Mourad, Azzam ;
Talhi, Chamseddine .
IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (06) :4723-4735
[2]  
Bonawitz K. A., 2019, Proceedings of machine learning and systems, P374
[3]   Practical Secure Aggregation for Privacy-Preserving Machine Learning [J].
Bonawitz, Keith ;
Ivanov, Vladimir ;
Kreuter, Ben ;
Marcedone, Antonio ;
McMahan, H. Brendan ;
Patel, Sarvar ;
Ramage, Daniel ;
Segal, Aaron ;
Seth, Karn .
CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017, :1175-1191
[4]  
Caldas S, 2019, Arxiv, DOI arXiv:1812.07210
[5]   Hyperspectral Image Classification Based on Fusing S3-PCA, 2D-SSA and Random Patch Network [J].
Chen, Huayue ;
Wang, Tingting ;
Chen, Tao ;
Deng, Wu .
REMOTE SENSING, 2023, 15 (13)
[6]   Communication-Efficient Federated Deep Learning With Layerwise Asynchronous Model Update and Temporally Weighted Aggregation [J].
Chen, Yang ;
Sun, Xiaoyan ;
Jin, Yaochu .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (10) :4229-4238
[7]   ANALYSIS OF THE INCREASE AND DECREASE ALGORITHMS FOR CONGESTION AVOIDANCE IN COMPUTER-NETWORKS [J].
CHIU, DM ;
JAIN, R .
COMPUTER NETWORKS AND ISDN SYSTEMS, 1989, 17 (01) :1-14
[8]   AUCTION: Automated and Quality-Aware Client Selection Framework for Efficient Federated Learning [J].
Deng, Yongheng ;
Lyu, Feng ;
Ren, Ju ;
Wu, Huaqing ;
Zhou, Yuezhi ;
Zhang, Yaoxue ;
Shen, Xuemin .
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (08) :1996-2009
[9]   Astraea: Self-balancing Federated Learning for Improving Classification Accuracy of Mobile Deep Learning Applications [J].
Duan, Moming ;
Liu, Duo ;
Chen, Xianzhang ;
Tan, Yujuan ;
Ren, Jinting ;
Qiao, Lei ;
Liang, Liang .
2019 IEEE 37TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD 2019), 2019, :246-254
[10]  
Ferdinand N, 2020, Arxiv, DOI arXiv:2006.05752