Knowledge Distillation Assisted Robust Federated Learning: Towards Edge Intelligence

Cited by: 3
Authors
Qiao, Yu [1 ]
Adhikary, Apurba [2 ]
Kim, Ki Tae [2 ]
Zhang, Chaoning [1 ]
Hong, Choong Seon [2 ]
Affiliations
[1] Kyung Hee Univ, Dept Artificial Intelligence, Yongin 17104, South Korea
[2] Kyung Hee Univ, Dept Comp Sci & Engn, Yongin 17104, South Korea
Source
ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS | 2024
Funding
National Research Foundation of Singapore;
Keywords
Federated learning; adversarial attack; knowledge distillation; non-IID; edge intelligence;
DOI
10.1109/ICC51166.2024.10622956
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) makes it possible to advance toward edge intelligence by enabling collaborative, privacy-preserving model training across distributed edge devices. One of the main challenges in FL is the non-IID (not independent and identically distributed) nature of the data across edge devices, which leads to inconsistent update directions between local and global models and thus hinders convergence. Moreover, recent studies have shown that the performance of FL models can degrade significantly under adversarial attacks, which further challenges deployment at the edge. In this work, we aim to improve the robustness of FL models under adversarial attacks in non-IID settings by sharing knowledge between a central server and edge devices via knowledge distillation. Specifically, we propose a new knowledge distillation-based federated adversarial training (FAT) framework, termed FedAdv (Federated Adversarial), in which an edge server builds global prototypes by aggregating the local prototypes obtained from participating devices after adversarial training (AT). These global prototypes are then distributed back to the edge devices for regularization, encouraging each device to align its local representations with the corresponding global prototypes and thereby preventing local model updates from deviating significantly from the global model. Experimental results on MNIST and Fashion-MNIST show that our strategy yields comparable or superior gains in both natural and robust accuracy relative to several baselines.
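The prototype aggregation and alignment described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the function names are hypothetical, prototypes are taken to be per-class mean feature vectors, aggregation is assumed to be a plain average across devices, and the alignment term is assumed to be a squared-L2 distance added to each device's adversarial-training loss.

```python
import numpy as np

def aggregate_global_prototypes(local_prototypes):
    """Average the per-class local prototypes uploaded by all devices.

    local_prototypes: list (one entry per device) of dicts mapping
    class_id -> feature vector (np.ndarray). A device that saw no
    samples of a class simply omits that class from its dict.
    """
    sums, counts = {}, {}
    for protos in local_prototypes:
        for c, vec in protos.items():
            sums[c] = sums.get(c, 0.0) + vec
            counts[c] = counts.get(c, 0) + 1
    # Global prototype per class = mean over the devices that reported it.
    return {c: sums[c] / counts[c] for c in sums}

def prototype_alignment_loss(features, labels, global_protos):
    """Regularizer pulling each sample's representation toward the
    global prototype of its class (mean squared L2 distance)."""
    return float(np.mean([np.sum((f - global_protos[y]) ** 2)
                          for f, y in zip(features, labels)]))
```

In a full round, each device would add this alignment loss (scaled by a weight) to its adversarial-training objective, keeping local updates from drifting too far from the globally shared representation.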
Pages: 843-848
Number of pages: 6