FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

Cited by: 208

Authors
Cao, Xiaoyu [1 ]
Fang, Minghong [2 ]
Liu, Jia [2 ]
Gong, Neil Zhenqiang [1 ]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
[2] Ohio State Univ, Columbus, OH 43210 USA
Funding
U.S. National Science Foundation
DOI
10.14722/ndss.2021.24434
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Byzantine-robust federated learning aims to enable a service provider to learn an accurate global model even when a bounded number of clients are malicious. The key idea of existing Byzantine-robust federated learning methods is that the service provider performs statistical analysis on the clients' local model updates and removes suspicious ones before aggregating them to update the global model. However, malicious clients can still corrupt the global models in these methods by sending carefully crafted local model updates to the service provider. The fundamental reason is that there is no root of trust in existing federated learning methods: from the service provider's perspective, every client could be malicious. In this work, we bridge the gap by proposing FLTrust, a new federated learning method in which the service provider itself bootstraps trust. In particular, the service provider collects a small, clean training dataset (called the root dataset) for the learning task and maintains a model (called the server model) based on it to bootstrap trust. In each iteration, the service provider first assigns a trust score to each local model update from the clients, where a local model update receives a lower trust score if its direction deviates more from the direction of the server model update. Then, the service provider normalizes the magnitudes of the local model updates so that they lie on the same hyper-sphere as the server model update in the vector space. This normalization limits the impact of malicious local model updates with large magnitudes. Finally, the service provider computes the average of the normalized local model updates, weighted by their trust scores, as the global model update, which is used to update the global model. Our extensive evaluations on six datasets from different domains show that FLTrust is secure against both existing attacks and strong adaptive attacks. For instance, using a root dataset with fewer than 100 examples, FLTrust under adaptive attacks with 40%-60% malicious clients can still train global models that are as accurate as the global models trained by FedAvg under no attacks, where FedAvg is a popular method in non-adversarial settings.
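The aggregation rule described in the abstract can be sketched compactly. The following NumPy sketch assumes flattened 1-D update vectors; it uses the ReLU-clipped cosine similarity between each local update and the server update as the trust score, rescales each local update to the server update's magnitude, and returns the trust-weighted average. Function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def fltrust_aggregate(server_update, local_updates):
    """Trust-weighted aggregation in the style of FLTrust.

    server_update : 1-D array, the update computed on the root dataset.
    local_updates : list of 1-D arrays, one update per client.
    """
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    scores, normalized = [], []
    for g in local_updates:
        g = np.asarray(g, dtype=float)
        g_norm = np.linalg.norm(g)
        # Trust score: cosine similarity with the server update, clipped at 0
        # so that updates pointing away from the server update get no weight.
        cos = np.dot(g, g0) / (g_norm * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))
        # Normalize the local update onto the server update's hyper-sphere.
        normalized.append(g * (g0_norm / (g_norm + 1e-12)))
    total = sum(scores)
    if total == 0.0:
        # No client was deemed trustworthy this round; skip the update.
        return np.zeros_like(g0)
    return sum(s * g for s, g in zip(scores, normalized)) / total
```

A client whose update has a negative cosine similarity with the server update contributes nothing, and a client with a huge-magnitude update contributes no more than any other, which is what bounds the damage a single malicious client can do.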
Pages: 18
Related Papers
50 items in total
  • [31] SEAR: Secure and Efficient Aggregation for Byzantine-Robust Federated Learning. Zhao, Lingchen; Jiang, Jianlin; Feng, Bo; Wang, Qian; Shen, Chao; Li, Qi. IEEE Transactions on Dependable and Secure Computing, 2022, 19(05): 3329-3342.
  • [32] FLForest: Byzantine-robust Federated Learning through Isolated Forest. Wang, Tao; Zhao, Bo; Fang, Liming. 2022 IEEE 28th International Conference on Parallel and Distributed Systems (ICPADS), 2022: 296-303.
  • [33] Byzantine-Robust and Communication-Efficient Personalized Federated Learning. Zhang, Jiaojiao; He, Xuechao; Huang, Yue; Ling, Qing. IEEE Transactions on Signal Processing, 2025, 73: 26-39.
  • [34] Byzantine-robust federated learning performance evaluation via distance-statistical aggregations. Colosimo, Francesco; Rocca, Giovanni. Assurance and Security for AI-Enabled Systems, 2024, 13054.
  • [35] Byzantine-robust federated learning via credibility assessment on non-IID data. Zhai, Kun; Ren, Qiang; Wang, Junli; Yan, Chungang. Mathematical Biosciences and Engineering, 2022, 19(02): 1659-1676.
  • [36] FBR-FL: Fair and Byzantine-Robust Federated Learning via SPD Manifold. Zhang, Tao; Li, Haoshuo; Liu, Teng; Song, Anxiao; Shen, Yulong. Pattern Recognition and Computer Vision (PRCV 2024), Pt 1, 2025, 15031: 395-409.
  • [37] Byzantine-Robust and Privacy-Preserving Federated Learning With Irregular Participants. Chen, Yinuo; Tan, Wuzheng; Zhong, Yijian; Kang, Yulin; Yang, Anjia; Weng, Jian. IEEE Internet of Things Journal, 2024, 11(21): 35193-35205.
  • [38] Better Together: Attaining the Triad of Byzantine-robust Federated Learning via Local Update Amplification. Shen, Liyue; Zhang, Yanjun; Wang, Jingwei; Bai, Guangdong. Proceedings of the 38th Annual Computer Security Applications Conference (ACSAC 2022), 2022: 201-213.
  • [39] BRFL: A blockchain-based byzantine-robust federated learning model. Li, Yang; Xia, Chunhe; Li, Chang; Wang, Tianbo. Journal of Parallel and Distributed Computing, 2025, 196.
  • [40] Communication-Efficient and Byzantine-Robust Differentially Private Federated Learning. Li, Min; Xiao, Di; Liang, Jia; Huang, Hui. IEEE Communications Letters, 2022, 26(08): 1725-1729.