Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

Cited: 0
Authors
Zhou, Guanqiang [1 ,2 ]
Xu, Ping [3 ]
Wang, Yue [4 ]
Tian, Zhi [1 ]
Affiliations
[1] George Mason Univ, Dept Elect & Comp Engn, Fairfax, VA 22030 USA
[2] Univ Iowa, Dept Elect & Comp Engn, Iowa City, IA 52242 USA
[3] Univ Texas Rio Grande Valley, Dept Elect & Comp Engn, Edinburg, TX 78539 USA
[4] Georgia State Univ, Dept Comp Sci, Atlanta, GA 30303 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
NIST; Robustness; Distance learning; Computer aided instruction; Computational modeling; Convergence; Servers; Byzantine attacks; distributed learning; distributional shifts; norm-based screening (NBS); Wasserstein distance; OPTIMIZATION; MODELS;
DOI
10.1109/TNNLS.2024.3436149
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In distributed learning systems, robustness threats arise from two major sources. On the one hand, distributional shifts between training data and test data can cause the trained model to exhibit poor out-of-sample performance. On the other hand, a fraction of the working nodes may be subject to Byzantine attacks, which can invalidate the learning result. In this article, we propose a new research direction that jointly considers distributional shifts and Byzantine attacks, and we highlight the major challenges in addressing these two issues simultaneously. Accordingly, we design a new algorithm that equips distributed learning with both distributional robustness and Byzantine robustness. Our algorithm builds on recent advances in distributionally robust optimization (DRO) as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases, with the learning model being nonconvex, convex, or strongly convex, shedding light on its convergence behavior and its resilience to Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the fraction of Byzantine nodes is 1/3 or higher, rather than 1/2 as commonly believed in the current literature. The experimental results verify our theoretical findings (on the breakpoint of NBS, among others) and demonstrate the effectiveness of our algorithm against both robustness issues, justifying our choice of NBS over other widely used robust aggregation schemes. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
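The norm-based screening (NBS) aggregation mentioned in the abstract can be sketched as follows. This is a minimal illustration only, assuming NBS discards the gradients with the largest Euclidean norms before averaging; the function name `nbs_aggregate` and the exact screening rule are illustrative assumptions, not the paper's precise procedure.

```python
import numpy as np

def nbs_aggregate(grads, num_byzantine):
    """Norm-based screening sketch: drop the `num_byzantine` gradients
    with the largest Euclidean norms, then average the survivors."""
    norms = [np.linalg.norm(g) for g in grads]
    # Keep the indices of the smallest-norm gradients after screening.
    keep = np.argsort(norms)[: len(grads) - num_byzantine]
    return np.mean([grads[i] for i in keep], axis=0)

# Honest workers send gradients near the true one; a single attacker
# sends a large-norm vector, which the screening step removes.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
attack = [np.array([100.0, -100.0])]
agg = nbs_aggregate(honest + attack, num_byzantine=1)  # averages only the honest gradients
```

This toy example also hints at the breakpoint result stated in the abstract: screening by norm only helps while the honest gradients form a sufficiently large majority of the kept set.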
Pages: 15