Like traditional machine learning, federated learning is susceptible to adversarial attacks. Existing defenses against such attacks often rely on extensive labeling during local training to enhance model robustness, but labeling typically requires significant resources. To address both the cost of labeling and the robustness issues in federated learning, we propose the Unsupervised Perturbation based Self-Supervised Federated Adversarial Training (UPFAT) framework. Within local clients, we introduce an unsupervised adversarial sample generation method that adapts the classical self-supervised framework BYOL (Bootstrap Your Own Latent): it maximizes the distance between the embeddings of different transformations of the same input, generating unsupervised adversarial samples designed to confuse the model. For model communication, we present the Robustness-Enhanced Moving Average (REMA) module, which adaptively incorporates global model updates based on the local model's robustness. Extensive experiments demonstrate that UPFAT outperforms existing methods by 3∼4%.
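The unsupervised adversarial sample generation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a toy linear encoder standing in for the BYOL online network, uses the standard BYOL regression loss between L2-normalized embeddings (equivalent to `2 - 2*cos`), and runs a PGD-style sign-gradient ascent to push one view's embedding away from the other view's, with no labels involved. All function names and hyperparameters (`eps`, `alpha`, `steps`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoder" standing in for the BYOL online network.
W = rng.standard_normal((8, 16))

def embed(x):
    return W @ x

def byol_loss(u, v):
    # BYOL regression loss between L2-normalized embeddings:
    # || u/|u| - v/|v| ||^2  =  2 - 2*cos(u, v)
    un = u / np.linalg.norm(u)
    vn = v / np.linalg.norm(v)
    return float(np.sum((un - vn) ** 2))

def loss_grad_wrt_input(x, v):
    # Analytic gradient of the BYOL loss w.r.t. the encoder input,
    # valid for the linear encoder u = W x used in this sketch.
    u = embed(x)
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    cos = (u @ v) / (nu * nv)
    # d(2 - 2*cos)/du = -2 * ( v/(|u||v|) - cos * u/|u|^2 )
    dl_du = -2.0 * (v / (nu * nv) - cos * u / nu**2)
    return W.T @ dl_du

def unsupervised_pgd(x_view1, x_view2, eps=0.5, alpha=0.1, steps=10):
    # PGD-style ascent: perturb view 1 to MAXIMIZE the BYOL loss
    # against the fixed embedding of view 2 -- no labels needed.
    v = embed(x_view2)                  # stop-gradient target embedding
    delta = np.zeros_like(x_view1)
    for _ in range(steps):
        g = loss_grad_wrt_input(x_view1 + delta, v)
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return x_view1 + delta

x = rng.standard_normal(16)
x_aug = x + 0.05 * rng.standard_normal(16)  # a second "augmented view"
x_adv = unsupervised_pgd(x, x_aug)

clean_loss = byol_loss(embed(x), embed(x_aug))
adv_loss = byol_loss(embed(x_adv), embed(x_aug))
print(adv_loss > clean_loss)  # the adversarial view is pushed apart
```

In the full framework a deep online/target network pair would replace the linear encoder and the gradient would come from autodiff, but the adversarial objective (maximize the self-supervised distance between views of the same input) is the same.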