Robust Aggregation for Federated Learning by Minimum γ-Divergence Estimation

Cited: 3
Authors
Li, Cen-Jhih [1 ]
Huang, Pin-Han [2 ]
Ma, Yi-Ting [1 ]
Hung, Hung [3 ]
Huang, Su-Yun [1 ]
Affiliations
[1] Acad Sinica, Inst Stat Sci, Taipei 11529, Taiwan
[2] Natl Taiwan Univ, Data Sci Degree Program, Taipei 10617, Taiwan
[3] Natl Taiwan Univ, Inst Epidemiol & Prevent Med, Taipei 10055, Taiwan
Keywords
Byzantine problem; density power divergence; federated learning; gamma-divergence; influence function; robustness
DOI
10.3390/e24050686
Chinese Library Classification
O4 [Physics]
Subject Classification Code
0702
Abstract
Federated learning is a framework in which multiple devices or institutions, called local clients, collaboratively train a global model without sharing their data. In federated learning with a central server, an aggregation algorithm integrates the model information sent by local clients to update the parameters of the global model. The sample mean is the simplest and most commonly used aggregation method. However, it is not robust to data with outliers or under the Byzantine problem, where Byzantine clients send malicious messages to interfere with the learning process. Several robust aggregation methods have been introduced in the literature, including the marginal median, the geometric median, and the trimmed mean. In this article, we propose an alternative robust aggregation method, named the gamma-mean, which is the minimum divergence estimate based on a robust density power divergence. The gamma-mean aggregation mitigates the influence of Byzantine clients by assigning them smaller weights. This weighting scheme is data-driven and controlled by the gamma value. Robustness is discussed from the viewpoint of the influence function, and some numerical results are presented.
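The weighting scheme described in the abstract can be sketched as follows: under a Gaussian working model with a fixed scale, the minimum γ-divergence location estimate reduces to an iteratively reweighted mean whose weights decay exponentially with a client update's squared distance from the current center, so Byzantine updates receive near-zero weight. This is an illustrative sketch under those assumptions, not the paper's implementation; the function name `gamma_mean`, the fixed scale `sigma`, and the median initialization are all choices made here for the example.

```python
import numpy as np

def gamma_mean(updates, gamma=0.1, sigma=1.0, n_iter=50, tol=1e-8):
    """Robust aggregate of client update vectors via an iteratively
    reweighted mean (sketch of a minimum gamma-divergence location
    estimate under a Gaussian working model with fixed scale).

    Weights are proportional to the working density raised to the
    power gamma, so updates far from the current center, such as
    Byzantine messages, get exponentially small weight."""
    X = np.asarray(updates, dtype=float)   # shape (n_clients, dim)
    mu = np.median(X, axis=0)              # robust initialization
    for _ in range(n_iter):
        d2 = np.sum((X - mu) ** 2, axis=1)
        w = np.exp(-gamma * d2 / (2.0 * sigma ** 2))
        w /= w.sum()                       # normalize the weights
        mu_new = w @ X                     # weighted mean of updates
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```

Note that as gamma tends to 0 the weights become uniform and the estimate reduces to the plain sample mean, which matches the role of gamma as a data-driven robustness control in the abstract; in the paper's full method the scale would also be handled by the estimation rather than fixed.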
Pages: 15
Related Papers
50 in total
  • [1] Robust Aggregation for Federated Learning
    Pillutla, Krishna
    Kakade, Sham M.
    Harchaoui, Zaid
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2022, 70 : 1142 - 1154
  • [2] Robust Aggregation Function in Federated Learning
    Taheri, Rahim
    Arabikhan, Farzad
    Gegov, Alexander
    Akbari, Negar
    ADVANCES IN INFORMATION SYSTEMS, ARTIFICIAL INTELLIGENCE AND KNOWLEDGE MANAGEMENT, ICIKS 2023, 2024, 486 : 168 - 175
  • [3] Byzantine-Robust Aggregation for Federated Learning with Reinforcement Learning
    Yan, Sizheng
    Du, Junping
    Xue, Zhe
    Li, Ang
    WEB AND BIG DATA, APWEB-WAIM 2024, PT IV, 2024, 14964 : 152 - 166
  • [4] Federated Learning Aggregation: New Robust Algorithms with Guarantees
    Ben Mansour, Adnan
    Carenini, Gaia
    Duplessis, Alexandre
    Naccache, David
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 721 - 726
  • [5] Robust Secure Aggregation with Lightweight Verification for Federated Learning
    Huang, Chao
    Yao, Yanqing
    Zhang, Xiaojun
    Teng, Da
    Wang, Yingdong
    Zhou, Lei
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 582 - 589
  • [6] RTGA: Robust ternary gradients aggregation for federated learning
    Yang, Chengang
    Xiao, Danyang
    Cao, Bokai
    Wu, Weigang
    INFORMATION SCIENCES, 2022, 616 : 427 - 443
  • [7] Robust minimum divergence estimation in a spatial Poisson point process
    Saigusa, Yusuke
    Eguchi, Shinto
    Komori, Osamu
arXiv, 2023
  • [8] Robust Independent Component Analysis via Minimum γ-Divergence Estimation
    Chen, Pengwen
    Hung, Hung
    Komori, Osamu
    Huang, Su-Yun
    Eguchi, Shinto
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2013, 7 (04) : 614 - 624
  • [9] Robust minimum divergence estimation in a spatial Poisson point process
    Saigusa, Yusuke
    Eguchi, Shinto
    Komori, Osamu
    ECOLOGICAL INFORMATICS, 2024, 81
  • [10] A Privacy Robust Aggregation Method Based on Federated Learning in the IoT
    Li, Qingtie
    Wang, Xuemei
    Ren, Shougang
    ELECTRONICS, 2023, 12 (13)