Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

Cited by: 0
Authors
Zhou, Guanqiang [1,2]
Xu, Ping [3]
Wang, Yue [4]
Tian, Zhi [1]
Affiliations
[1] George Mason Univ, Dept Elect & Comp Engn, Fairfax, VA 22030 USA
[2] Univ Iowa, Dept Elect & Comp Engn, Iowa City, IA 52242 USA
[3] Univ Texas Rio Grande Valley, Dept Elect & Comp Engn, Edinburg, TX 78539 USA
[4] Georgia State Univ, Dept Comp Sci, Atlanta, GA 30303 USA
Funding
U.S. National Science Foundation (NSF);
Keywords
NIST; Robustness; Distance learning; Computer aided instruction; Computational modeling; Convergence; Servers; Byzantine attacks; distributed learning; distributional shifts; norm-based screening (NBS); Wasserstein distance; OPTIMIZATION; MODELS;
DOI
10.1109/TNNLS.2024.3436149
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Code(s)
081104; 0812; 0835; 1405
Abstract
In distributed learning systems, robustness threats arise from two major sources. On the one hand, distributional shifts between training data and test data can cause the trained model to exhibit poor out-of-sample performance. On the other hand, a portion of the working nodes may be subject to Byzantine attacks, which can invalidate the learning results. In this article, we propose a new research direction that jointly considers distributional shifts and Byzantine attacks. We highlight the major challenges in addressing these two issues simultaneously and, accordingly, design a new algorithm that equips distributed learning with both distributional robustness and Byzantine robustness. Our algorithm builds on recent advances in distributionally robust optimization (DRO) as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases, with the learning model being nonconvex, convex, or strongly convex, shedding light on its convergence behavior and its tolerance to Byzantine attacks. In particular, we show that any algorithm employing NBS (including ours) cannot converge when the fraction of Byzantine nodes is 1/3 or higher, rather than the 1/2 commonly believed in the current literature. The experimental results verify our theoretical findings (on the breakdown point of NBS, among others) and demonstrate the effectiveness of our algorithm against both robustness issues, justifying our choice of NBS over other widely used robust aggregation schemes. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
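The abstract references two technical building blocks that can be made concrete. First, Wasserstein-based DRO typically replaces empirical risk minimization with a worst-case objective over an ambiguity ball around the empirical distribution. The display below is the generic form of that objective; the symbols used (empirical distribution \widehat{P}_n, radius \rho, loss \ell) are standard notation assumed here for illustration and not necessarily the paper's exact formulation:

```latex
\min_{\theta}\; \sup_{Q:\, W(Q,\widehat{P}_n)\le \rho}\; \mathbb{E}_{\xi\sim Q}\big[\ell(\theta;\xi)\big]
```

where \widehat{P}_n is the empirical training distribution, W(\cdot,\cdot) is the Wasserstein distance, \rho is the radius of the ambiguity set, and \ell is the loss.

Second, norm-based screening (NBS) is the Byzantine-robust aggregation rule the algorithm relies on. The minimal sketch below shows one common way such screening can be implemented at the server, assuming it discards a fixed number of largest-norm updates and averages the rest; the function name nbs_aggregate, the parameter num_screen, and the toy data are assumptions for illustration only, not the authors' implementation:

```python
import numpy as np

def nbs_aggregate(grads, num_screen):
    """Norm-based screening (illustrative sketch): discard the num_screen
    worker gradients with the largest Euclidean norms, average the rest."""
    norms = np.array([np.linalg.norm(g) for g in grads])
    keep_idx = np.argsort(norms)[: len(grads) - num_screen]  # keep smallest norms
    return np.mean([grads[i] for i in keep_idx], axis=0)

# Toy usage: 8 honest workers plus 2 Byzantine workers sending inflated updates.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 1.0, size=10) for _ in range(8)]
byzantine = [100.0 * rng.normal(0.0, 1.0, size=10) for _ in range(2)]
aggregated = nbs_aggregate(honest + byzantine, num_screen=2)
print(aggregated.shape)  # (10,)
```

Consistent with the abstract's claim, the article analyzes screening of this kind as breaking down once Byzantine nodes reach a 1/3 fraction, regardless of how many updates are screened out.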
Pages: 15