PP-QADMM: A Dual-Driven Perturbation and Quantized ADMM for Privacy Preserving and Communication-Efficient Federated Learning

Cited by: 0
Authors
Elgabli, Anis [1 ,2 ]
Affiliations
[1] King Fahd Univ Petr & Minerals, Ind & Syst Engn Dept, Dhahran 31261, Saudi Arabia
[2] King Fahd Univ Petr & Minerals, Interdisciplinary Res Ctr Commun Syst & Sensing, Dhahran 31261, Saudi Arabia
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY | 2025 / Volume 6
Keywords
Privacy; Convergence; Quantization (signal); Computational modeling; Perturbation methods; Costs; Convex functions; Federated learning; Optimization; Noise; Communication-efficient federated learning; differential privacy; ADMM; stochastic quantization; secure aggregation; CONSENSUS; OPTIMIZATION; ALGORITHM;
DOI
10.1109/OJCOMS.2025.3566464
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809
Abstract
This article presents Privacy-Preserving and Quantized ADMM (PP-QADMM), a novel federated learning (FL) algorithm that is both privacy-preserving and communication-efficient, built upon the Alternating Direction Method of Multipliers (ADMM). PP-QADMM enhances privacy through three core mechanisms. First, dual variables are randomly initialized at the workers' side. Second, these dual variables are updated solely by the workers throughout the learning process. Third, only a combined representation of the primal and dual variables is transmitted to the parameter server (PS). This design prevents the PS from performing model inversion, as the individual model updates are obfuscated within the transmitted combination. The algorithm effectively merges principles from differential privacy (DP) and secure aggregation. Specifically, the dual variables act as perturbation noise, ensuring that each worker's model update satisfies (ε, δ)-DP. Importantly, during aggregation at the PS, this noise cancels out, enabling accurate recovery of the quantized global model while safeguarding individual contributions. PP-QADMM inherits the theoretical privacy guarantees of DP, yet with no loss in performance, and inherits the secure aggregation capability of multi-party computation (MPC) without incurring additional communication or computation overhead. To further improve communication efficiency, workers quantize their updates before transmission at each iteration. The quantization scheme is designed such that, as the iteration index k → ∞, the quantization error vanishes asymptotically. We provide a rigorous theoretical proof of convergence, showing that PP-QADMM converges to the optimal solution for convex problems while achieving a convergence rate comparable to standard ADMM, but with significantly lower communication and energy costs and robust privacy protection. Extensive numerical experiments on a convex linear regression task validate the effectiveness of PP-QADMM. On a publicly available dataset with 100 workers, our results show that PP-QADMM transmits only 1/16 of the bits required by standard ADMM while achieving a loss value of 1×10⁻¹⁰, a level of accuracy unattainable with DP-ADMM alone due to its inherent utility-privacy trade-off.
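The two mechanisms the abstract describes, dual variables acting as perturbation noise that cancels at the parameter server and a stochastic quantizer whose error shrinks over iterations, can be illustrated with a minimal Python sketch. This is not the paper's exact PP-QADMM recursion: the helper names stochastic_quantize and simulate_round, the iteration-dependent quantization grid, and the assumption that duals are initialized to sum to zero are all illustrative choices made here for clarity.

import numpy as np

def stochastic_quantize(x, k, base_levels=4):
    # Unbiased stochastic rounding on a grid whose resolution grows with the
    # iteration index k, so the quantization error shrinks as k increases.
    # (Illustrative quantizer, not necessarily the paper's exact scheme.)
    levels = base_levels * (k + 1)
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    step = (hi - lo) / levels
    scaled = (x - lo) / step
    floor = np.floor(scaled)
    prob_up = scaled - floor                      # round up with this probability
    rounded = floor + (np.random.rand(*x.shape) < prob_up)
    return lo + rounded * step

def simulate_round(local_models, duals, k, rho=1.0):
    # One illustrative upload/aggregation round: each worker sends a quantized
    # combination of its primal model and its dual variable, and the server
    # averages the messages. Because the duals sum to zero across workers,
    # the dual "noise" cancels in the aggregate.
    messages = []
    for theta_i, lam_i in zip(local_models, duals):
        combined = theta_i + lam_i / rho          # dual acts as perturbation noise
        messages.append(stochastic_quantize(combined, k))
    return np.mean(messages, axis=0)

# Toy demo: 4 workers, 3-dimensional models.
rng = np.random.default_rng(0)
models = [rng.normal(size=3) for _ in range(4)]
duals = [rng.normal(size=3) for _ in range(3)]
duals.append(-np.sum(duals, axis=0))              # force duals to sum to zero
for k in (1, 10, 100):
    agg = simulate_round(models, duals, k)
    print(k, np.linalg.norm(agg - np.mean(models, axis=0)))

In this toy run the server recovers the average of the workers' models increasingly accurately as k grows (the quantization error vanishes), while any single uploaded message reveals only a perturbed, quantized combination rather than the worker's raw model.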
Pages: 4156-4175
Number of pages: 20