Achieving Provable Byzantine Fault-tolerance in a Semi-honest Federated Learning Setting

Cited by: 0
Authors
Tang, Xingxing [1 ]
Gu, Hanlin [2 ]
Fan, Lixin [2 ]
Yang, Qiang [1 ,2 ]
Affiliations
[1] HKUST, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] WeBank, WeBank AI Lab, Shenzhen, Peoples R China
Source
ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2023, PT II | 2023 / Vol. 13936
Keywords
Federated Learning; Byzantine Fault-tolerance; Semi-honest party;
DOI
10.1007/978-3-031-33377-4_32
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning (FL) is a suite of technologies that allows multiple distributed participants to collaboratively build a global machine learning model without disclosing their private datasets to each other. We consider an FL setting in which there may exist both a) semi-honest participants who aim to eavesdrop on other participants' private datasets; and b) Byzantine participants who aim to degrade the performance of the global model by submitting detrimental model updates. The proposed framework leverages the Expectation-Maximization algorithm: the E-step estimates the unknown membership of each participant (Byzantine or benign), and the M-step optimizes the global model performance by excluding malicious model updates uploaded by Byzantine participants. One novel feature of the proposed method, which facilitates reliable detection of Byzantine participants even under HE or MPC protections, is to estimate participant membership based on the performances of a set of randomly generated candidate models evaluated by all participants. Extensive experiments and theoretical analysis demonstrate that our framework guarantees Byzantine fault-tolerance in various federated learning settings with privacy-preserving mechanisms.
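The EM-style detection described in the abstract can be illustrated with a small sketch. This is not the authors' implementation; the participant counts, score model, and soft-assignment weighting below are illustrative assumptions. Benign participants report evaluation scores for random candidate models that correlate with each other, while Byzantine participants do not, which lets an EM loop separate the two groups before aggregation:

```python
# Illustrative sketch (hypothetical, not the paper's code): an EM-style loop
# that estimates which participants are Byzantine from their evaluation scores
# of randomly generated candidate models, then aggregates only trusted updates.
import numpy as np

rng = np.random.default_rng(0)
n_participants, dim, n_candidates = 8, 5, 20
true_update = np.ones(dim)

# Benign participants (first six) submit noisy copies of the true update;
# Byzantine participants (last two) submit sign-flipped updates.
updates = np.stack(
    [true_update + 0.1 * rng.standard_normal(dim) for _ in range(6)]
    + [-true_update for _ in range(2)]
)

# Each participant evaluates every random candidate model. Benign scores
# correlate with a shared underlying profile; Byzantine scores are arbitrary.
benign_profile = rng.standard_normal(n_candidates)
scores = np.stack(
    [benign_profile + 0.1 * rng.standard_normal(n_candidates) for _ in range(6)]
    + [rng.standard_normal(n_candidates) for _ in range(2)]
)

p_benign = np.full(n_participants, 0.5)  # initial membership belief
for _ in range(10):
    # E-step: consensus score profile under the current membership weights,
    # then a soft benign/Byzantine assignment from each residual.
    consensus = (p_benign @ scores) / p_benign.sum()
    resid = ((scores - consensus) ** 2).mean(axis=1)
    p_benign = np.exp(-resid / resid.mean())  # small residual => likely benign
    p_benign /= p_benign.max()
    # M-step: aggregate updates weighted by the estimated benign membership,
    # which effectively excludes the Byzantine submissions.
    global_update = (p_benign @ updates) / p_benign.sum()

print(np.round(p_benign, 2))  # Byzantine participants receive low weight
```

Because membership is inferred only from reported evaluation scores, the same scheme works when the model updates themselves are protected by HE or MPC, which is the setting the paper targets.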
Pages: 415-427
Page count: 13