Non trust detection of decentralized federated learning based on historical gradient

Cited: 7
Authors
Chen, Yikuan [1 ]
Liang, Li [1 ]
Gao, Wei [1 ]
Affiliations
[1] Yunnan Normal Univ, Sch Informat Sci & Technol, Kunming, Peoples R China
Keywords
Decentralized federated learning; Malicious detection; Historical gradient; POISONING ATTACKS; CLIENTS;
DOI
10.1016/j.engappai.2023.105888
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
As a paradigm of distributed machine learning, federated learning is widely used in real-world scenarios because of its strong privacy protection: local data never need to be disclosed. However, traditional federated learning has the defect that a third-party server aggregates the models of the various users, and the reliability of that third party is difficult to guarantee; moreover, multi-centre phenomena frequently appear in applications such as social networks, banking and finance, and medical health. In the decentralized setting, users cannot be fully reassured because malicious and untrustworthy participants are mixed among them. Although untrustworthy users are benign, they may be misclassified as saboteurs owing to their poor efficiency in decentralized federated learning, which is caused by missing or ambiguous data. In this paper, we propose the Decentralized Federated Learning Historical Gradient (DFedHG) approach to distinguish normal, untrustworthy, and malicious users in the decentralized federated learning setting. By means of DFedHG, malicious users are further sub-divided into untargeted and targeted attackers, which we verify on two types of data sets. The experimental results show that the proposed approach achieves better performance than conventional decentralized federated learning without untrustworthy users, and further provides excellent differentiation of malicious users.
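The abstract does not spell out the detection rule, but the general idea of screening participants by their accumulated historical gradients can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual DFedHG algorithm: the function name `classify_clients`, the cosine-similarity score against a coordinate-wise median reference, and the two thresholds are all assumptions made for illustration.

```python
# Hypothetical historical-gradient screening sketch (not the paper's exact method).
import numpy as np

def classify_clients(history, benign_thresh=0.5, untrusted_thresh=0.0):
    """Classify each client from its history of gradient updates.

    history: dict client_id -> list of 1-D gradient arrays (one per round).
    Returns: dict client_id -> 'normal' | 'untrustworthy' | 'malicious'.
    """
    # Accumulate each client's historical gradients into a single vector;
    # averaging over rounds smooths out per-round noise.
    accumulated = {c: np.sum(np.stack(g), axis=0) for c, g in history.items()}

    # Use the coordinate-wise median of the accumulated gradients as a
    # robust reference direction (the median resists a minority of outliers).
    reference = np.median(np.stack(list(accumulated.values())), axis=0)

    labels = {}
    for c, vec in accumulated.items():
        denom = np.linalg.norm(vec) * np.linalg.norm(reference)
        cos = float(vec @ reference / denom) if denom > 0 else 0.0
        if cos >= benign_thresh:
            labels[c] = "normal"          # aligned with the consensus direction
        elif cos >= untrusted_thresh:
            labels[c] = "untrustworthy"   # benign but noisy / low-quality updates
        else:
            labels[c] = "malicious"       # pointing away from the consensus
    return labels
```

In a decentralized setting each participant could run this check locally over the updates it has received from its neighbours, dropping peers labelled malicious and down-weighting untrustworthy ones.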
Pages: 12