Unsupervised perturbation based self-supervised federated adversarial training

Cited by: 0
Authors
Yuyue Zhang [1 ]
Hanchen Ye [1 ]
Xiaoli Zhao [1 ]
Affiliations
[1] Shanghai University of Engineering Science,School of Electronic and Electrical Engineering
Keywords
Federated learning; Self-supervised learning; Adversarial training; Robustness;
DOI
10.1007/s10489-024-05938-5
Abstract
Similar to traditional machine learning, federated learning is susceptible to adversarial attacks. Existing defense methods against federated attacks often rely on extensive labeling during local training to enhance model robustness, yet labeling typically requires significant resources. To address the dual challenges of expensive labeling and robustness in federated learning, we propose the Unsupervised Perturbation based Self-Supervised Federated Adversarial Training (UPFAT) framework. Within local clients, we introduce an unsupervised adversarial sample generation method that adapts the classical self-supervised framework BYOL (Bootstrap Your Own Latent): it maximizes the distance between embeddings of different transformations of the same input, generating unsupervised adversarial samples that confuse the model. For model communication, we present the Robustness-Enhanced Moving Average (REMA) module, which adaptively incorporates global model updates according to the local model's robustness. Extensive experiments demonstrate that UPFAT outperforms existing methods by 3∼4%.
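The following is a minimal, hypothetical sketch of how such label-free adversarial examples can be generated with a BYOL-style objective: a PGD-like inner loop perturbs one augmented view so as to maximize the embedding distance from the momentum (target) branch's embedding of the other view. The function names, hyperparameters (epsilon, alpha, steps), and attack details are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch (PyTorch): label-free adversarial example generation with a
# BYOL-style objective. All names and hyperparameters here are assumptions.
import torch
import torch.nn.functional as F

def byol_distance(p, z):
    # BYOL-style loss: 2 - 2 * cosine similarity between online prediction p
    # and target projection z.
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return 2.0 - 2.0 * (p * z).sum(dim=-1).mean()

def generate_unsupervised_adv(x_view1, x_view2, online_net, target_net,
                              epsilon=8 / 255, alpha=2 / 255, steps=5):
    # PGD-style inner loop: perturb view 1 to maximize its embedding distance
    # from the (fixed) target-branch embedding of view 2 -- no labels needed.
    with torch.no_grad():
        z_target = target_net(x_view2)
    delta = torch.empty_like(x_view1).uniform_(-epsilon, epsilon)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = byol_distance(online_net(x_view1 + delta), z_target)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = delta + alpha * grad.sign()                  # ascend on the BYOL distance
            delta = delta.clamp(-epsilon, epsilon)               # project into the L-inf ball
            delta = (x_view1 + delta).clamp(0.0, 1.0) - x_view1  # keep pixels in [0, 1]
    return (x_view1 + delta).detach()
```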
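A REMA-style client-side update could, under similar assumptions, blend the downloaded global weights into the local model with a coefficient driven by a measured local robustness score (e.g., adversarial accuracy on held-out local data). The mixing rule below is a hedged sketch of the general idea, not the paper's exact formula.

```python
# Illustrative sketch of a robustness-aware moving-average update on the client.
# Using robust accuracy directly as the mixing coefficient is an assumption.
import torch

@torch.no_grad()
def rema_update(local_model, global_state, local_robust_acc):
    # local_robust_acc in [0, 1]: a more robust local model keeps more of its own
    # weights; a less robust one leans more heavily on the global model.
    beta = float(local_robust_acc)
    local_state = local_model.state_dict()
    for name, w_global in global_state.items():
        if local_state[name].dtype.is_floating_point:  # skip integer buffers, e.g. BN counters
            local_state[name] = beta * local_state[name] \
                + (1.0 - beta) * w_global.to(local_state[name])
    local_model.load_state_dict(local_state)
```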