FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack

Cited by: 0
Authors
Shiwei Lu
Ruihu Li
Wenbin Liu
Affiliations
[1] Air Force Engineering University, Fundamentals Department
[2] Guangzhou University, Institute of Advanced Computational Science and Technology
Source
Frontiers of Computer Science | 2024, Vol. 18
Keywords
federated learning; privacy protection; adversarial attacks; aggregated rule; correctness verification
DOI
Not available
Abstract
Federated learning (FL) has emerged to break down data silos and protect clients’ privacy in the field of artificial intelligence. However, the deep leakage from gradient (DLG) attack can fully reconstruct clients’ data from the submitted gradients, which threatens the fundamental privacy of FL. Although cryptography and differential privacy prevent privacy leakage from gradients, they incur negative effects on communication overhead or model performance. Moreover, these schemes change the original distribution of the local gradient, which makes it difficult to defend against adversarial attacks. In this paper, we propose a novel federated learning framework with model decomposition, aggregation, and assembling (FedDAA), along with a training algorithm, to train the federated model, where the local gradient is decomposed into multiple blocks that are sent to different proxy servers for aggregation. To improve the privacy protection of FedDAA, an indicator based on image structural similarity is designed to measure privacy leakage under the DLG attack, and an optimization method is given to protect privacy with the fewest proxy servers. In addition, we present defense schemes against adversarial attacks in FedDAA and design an algorithm to verify the correctness of the aggregated results. Experimental results demonstrate that FedDAA can reduce the structural similarity between the reconstructed image and the original image to 0.014 while maintaining model convergence accuracy at 0.952, thus achieving the best privacy protection performance and model training effect. More importantly, the defense schemes against adversarial attacks are compatible with privacy protection in FedDAA, and their defense effects are no weaker than those in traditional FL. Moreover, the verification algorithm for aggregation results adds only negligible overhead to FedDAA.
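The decompose-aggregate-assemble flow described in the abstract can be sketched in a few lines. This is a minimal illustration only, assuming a flattened gradient vector, contiguous block splitting, and plain per-block averaging (FedAvg-style) at each proxy; the function names and the aggregation rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def decompose(gradient, num_proxies):
    """Split a flattened local gradient into contiguous blocks, one per proxy server."""
    return np.array_split(gradient, num_proxies)

def aggregate_blocks(client_blocks):
    """Each proxy averages the block it received from every client (per-block FedAvg)."""
    return [np.mean(blocks, axis=0) for blocks in zip(*client_blocks)]

def assemble(aggregated_blocks):
    """Reassemble the per-proxy aggregates into the full global gradient."""
    return np.concatenate(aggregated_blocks)

# Three clients, a 6-dimensional gradient, two proxy servers
grads = [np.array([1., 2., 3., 4., 5., 6.]),
         np.array([2., 3., 4., 5., 6., 7.]),
         np.array([3., 4., 5., 6., 7., 8.])]
client_blocks = [decompose(g, 2) for g in grads]
result = assemble(aggregate_blocks(client_blocks))
# result equals the element-wise average of the three gradients: [2. 3. 4. 5. 6. 7.]
```

Because each proxy only ever sees one block of each client's gradient, no single server holds the full gradient needed to mount a DLG-style reconstruction, which is the intuition behind trading more proxy servers for stronger privacy.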