Byzantine robust aggregation in federated distillation with adversaries

Cited by: 0
Authors
Li, Wenrui [1 ]
Gu, Hanlin [2 ]
Wan, Sheng [3 ]
Lu, Zhirong [4 ]
Xi, Wei [5 ]
Fan, Lixin [2 ]
Yang, Qiang [3 ]
Chen, Badong [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian, Peoples R China
[2] WeBank AI Lab, Shenzhen, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Comp Sci, Hong Kong, Peoples R China
[4] Xian Univ Technol, Sch Elect Engn, Xian, Peoples R China
[5] Xi An Jiao Tong Univ, Dept Comp Sci, Xian, Peoples R China
Source
2024 IEEE 44TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS 2024 | 2024
Funding
National Natural Science Foundation of China;
Keywords
Federated distillation; heterogeneous models; malicious attacks;
DOI
10.1109/ICDCS60910.2024.00086
CLC classification number
TP [Automation technology, computer technology];
Subject classification number
0812;
Abstract
Federated learning enables privacy-preserving, multi-party model training without sharing raw data. In recent years, knowledge distillation has emerged as a promising way to address the significant challenge of model heterogeneity in federated learning. However, current research often overlooks the threat posed by Byzantine attacks, which can severely compromise the security of federated distillation. Previous work on Byzantine attacks has focused primarily on manipulating local gradients to corrupt the global model, and has not considered attacks on the logits exchanged in knowledge-distillation scenarios. In this paper, we introduce two new attacks that expose the inherent risks of federated distillation: a top-k attack, which perturbs the top-k values of the logits in each column, and an impersonation attack, which emulates knowledge that deviates significantly from the norm. To counter such attacks, we propose a robust aggregation strategy, FedTGD (Federated Top Guard Distillation), designed to ensure robust distillation with heterogeneous models. Specifically, FedTGD combines Density-Based Spatial Clustering of Applications with Noise (DBSCAN) with maximum cosine similarity on the top-k values of the logits to select benign knowledge. Experiments on the FEMNIST and CIFAR100 datasets, covering both IID and Non-IID settings, show that the top-k attack causes a substantial 27.16% accuracy drop for FedMD, whereas our aggregation method suffers only a marginal 0.7% accuracy decrease under top-k attacks, outperforming state-of-the-art baselines.
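The abstract only sketches how FedTGD selects benign knowledge. As an illustration, the following is a minimal Python sketch of one plausible reading of that pipeline: summarize each client's logits by the top-k values in each class column, cluster those summaries with DBSCAN, and keep the clients in the majority cluster whose summaries have the highest cosine similarity to their peers. The function names, hyper-parameters (k, eps, min_samples), and the fallback behaviour are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def topk_column_features(logits, k=10):
    """Summarize one client's logits (n_public_samples x n_classes) by the
    k largest values in each class column, flattened into a single vector."""
    topk = np.sort(logits, axis=0)[-k:, :]  # (k, n_classes), per-column top-k
    return topk.flatten()


def fedtgd_style_aggregate(client_logits, k=10, eps=0.5, min_samples=2):
    """Hedged sketch of a FedTGD-style robust aggregation (assumed reading):
    1) compute per-column top-k summaries for every client,
    2) cluster the summaries with DBSCAN and treat the largest cluster as benign,
    3) within that cluster, keep clients with the highest mean cosine similarity
       to the rest, and average their logits as the distilled knowledge."""
    feats = np.stack([topk_column_features(l, k) for l in client_logits])
    # L2-normalize so inner products act as cosine similarities.
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(normed)

    valid = labels[labels != -1]
    if valid.size == 0:
        # Assumed fallback: if DBSCAN marks everything as noise, keep all clients.
        benign_idx = np.arange(len(client_logits))
    else:
        majority = np.bincount(valid).argmax()
        benign_idx = np.where(labels == majority)[0]

    # Rank the benign candidates by mean cosine similarity to the others
    # (subtract the self-similarity of 1.0) and keep the top half.
    sims = normed[benign_idx] @ normed[benign_idx].T
    mean_sim = (sims.sum(axis=1) - 1.0) / max(len(benign_idx) - 1, 1)
    keep = benign_idx[np.argsort(mean_sim)[::-1][: max(len(benign_idx) // 2, 1)]]
    return np.mean([client_logits[i] for i in keep], axis=0)
```

The design intuition, under these assumptions, is that a top-k attack inflates or perturbs exactly the largest per-class logit values, so clustering and cosine similarity computed on those values isolates the manipulated clients from the benign majority.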
Pages: 881 - 890
Number of pages: 10