Communication-Efficient Federated Learning: A Second Order Newton-Type Method With Analog Over-the-Air Aggregation

Cited by: 11
Authors
Krouka, Mounssif [1]
Elgabli, Anis [1]
Ben Issaid, Chaouki [1]
Bennis, Mehdi [1]
Affiliations
[1] Univ Oulu, Ctr Wireless Commun, Oulu 90014, Finland
Source
IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING | 2022, Vol. 6, No. 3
Funding
Academy of Finland;
Keywords
Convergence; Training; Privacy; Data models; Convex functions; Collaborative work; Atmospheric modeling; Distributed optimization; communication-efficient federated learning; second-order methods; analog-over-the-air aggregation; ADMM; STOCHASTIC GRADIENT DESCENT; EDGE;
DOI
10.1109/TGCN.2022.3173420
Chinese Library Classification (CLC)
TN [Electronics and Communication Technology];
Discipline Code
0809;
Abstract
Owing to their fast convergence, second-order Newton-type learning methods have recently received attention in the federated learning (FL) setting. However, current solutions rely on communicating the Hessian matrices from the devices to the parameter server at every iteration, which incurs a prohibitive communication overhead and calls for novel communication-efficient Newton-type learning methods. In this article, we propose a novel second-order Newton-type method that, similarly to its first-order counterpart, requires every device to share only a model-sized vector at each iteration while hiding its gradient and Hessian information. As a result, the proposed approach is significantly more communication-efficient and privacy-preserving. Furthermore, by leveraging the analog over-the-air aggregation principle, our method inherits additional privacy guarantees and achieves further communication-efficiency gains. In particular, we formulate the problem of learning the inverse Hessian-gradient product as a quadratic problem that is solved in a distributed way. The framework alternates between updating the inverse Hessian-gradient product using a few alternating direction method of multipliers (ADMM) steps and updating the global model using Newton's method. Numerical results show that the proposed approach is more communication-efficient and scalable under noisy channels, across different scenarios and multiple datasets.
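The abstract above outlines the algorithmic loop. The minimal Python/NumPy sketch below is not the authors' implementation; it only illustrates that loop, under assumed names and parameters (K devices, ADMM penalty RHO, channel-noise level NOISE_STD), on a toy distributed least-squares task: the Newton direction d = H^{-1}g is obtained as the minimizer of a quadratic consensus problem solved with a few ADMM steps, and the server-side aggregation is modeled as a noisy analog over-the-air sum.

```python
# Hedged sketch, NOT the paper's reference implementation. It mimics the loop
# described in the abstract: learn the inverse Hessian-gradient product as the
# minimizer of a quadratic consensus problem via a few ADMM steps, aggregate
# device transmissions as a noisy analog over-the-air sum, then take a
# Newton-type model update. All names and constants below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, N, D = 5, 40, 8           # devices, samples per device, model dimension
RHO = 1.0                    # assumed ADMM penalty parameter
NOISE_STD = 0.01             # assumed additive channel noise in the analog sum
ADMM_STEPS = 10              # "a few" ADMM steps per Newton update
NEWTON_STEPS = 3

# Synthetic local least-squares losses: f_k(w) = 0.5 * ||A_k w - b_k||^2
w_true = rng.normal(size=D)
A = [rng.normal(size=(N, D)) for _ in range(K)]
b = [A_k @ w_true + 0.05 * rng.normal(size=N) for A_k in A]

def local_grad_hess(w, k):
    """Local gradient g_k and Hessian H_k of device k (never transmitted)."""
    g_k = A[k].T @ (A[k] @ w - b[k])
    H_k = A[k].T @ A[k]
    return g_k, H_k

def ota_sum(vectors):
    """Analog over-the-air aggregation: simultaneous transmissions superpose
    into their sum, corrupted by additive Gaussian channel noise."""
    return np.sum(vectors, axis=0) + NOISE_STD * rng.normal(size=vectors[0].shape)

w = np.zeros(D)
for t in range(NEWTON_STEPS):
    grads_hess = [local_grad_hess(w, k) for k in range(K)]

    # Consensus ADMM for  min_d  sum_k (0.5 d^T H_k d - g_k^T d),
    # whose minimizer is the global Newton direction (sum_k H_k)^{-1} sum_k g_k.
    z = np.zeros(D)                       # global copy held by the server
    u = [np.zeros(D) for _ in range(K)]   # local (scaled) dual variables
    for _ in range(ADMM_STEPS):
        # Local primal update (closed form for a quadratic local objective).
        d = [np.linalg.solve(H_k + RHO * np.eye(D), g_k + RHO * (z - u[k]))
             for k, (g_k, H_k) in enumerate(grads_hess)]
        # Devices transmit the model-sized vectors d_k + u_k simultaneously;
        # the server observes only their noisy analog sum.
        z = ota_sum([d[k] + u[k] for k in range(K)]) / K
        u = [u[k] + d[k] - z for k in range(K)]

    w = w - z   # Newton-type model update with the learned direction

print("model error:", np.linalg.norm(w - w_true))
```

In this sketch each device transmits only a model-sized vector per ADMM step, and the server sees only the noisy superposition of those vectors rather than any individual gradient or Hessian, which is the mechanism the abstract credits for both the communication savings and the privacy benefit.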
Pages: 1862-1874
Page count: 13