One Teacher is Enough: A Server-Clueless Federated Learning With Knowledge Distillation

Citations: 0
Authors
Ning, Wanyi [1 ,2 ]
Qi, Qi [1 ,2 ]
Wang, Jingyu [1 ,2 ]
Zhu, Mengde [1 ,2 ]
Li, Shaolong [1 ,2 ]
Yang, Guang [3 ]
Liao, Jianxin [1 ,2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing 100876, Peoples R China
[2] E Byte COM, Beijing 100191, Peoples R China
[3] Alibaba DAMO Acad, XG Lab, Beijing 100102, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; privacy; knowledge distillation; privacy protection
DOI
10.1109/TSC.2024.3414372
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Machine learning-based services deliver intelligent solutions built on powerful models. To improve model robustness, Federated Learning (FL) has emerged as a promising collaborative learning paradigm that iteratively trains a global model by exchanging parameters among multiple clients, each training on its own local data. In practice, the local data are heterogeneous, which slows convergence. Knowledge distillation is an effective technique against data heterogeneity; however, existing works distill the ensemble knowledge from the local models, ignoring the natural global knowledge in the aggregated model. This imposes limitations on their algorithms, such as the need for proxy data or the exposure of local models to the server, which is prohibited in most privacy-preserving FL with a clueless server. In this work, we propose FedDGT, a novel knowledge distillation method for industrial server-clueless FL. FedDGT treats the aggregated model as the only teacher, distills its global knowledge into a generator, and then regularizes the drifted local models through the generator, overcoming the previous limitations and offering better privacy and scalability. Extensive experiments demonstrate that FedDGT achieves highly competitive model performance while greatly reducing communication rounds in a server-clueless scenario.
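The mechanism the abstract describes, distilling knowledge from a single aggregated teacher into a student, rests on the standard temperature-softened distillation objective (Hinton-style KD). A minimal NumPy sketch of that objective is below; all variable names and logit values are illustrative and are not taken from the paper's implementation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the single teacher
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

# Illustration: the aggregated global model acts as the lone teacher; its
# logits supervise a drifted local (student) model.
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.5, 0.7, -0.5]])
loss = kd_loss(student, teacher)

# The loss is zero exactly when the two softened distributions match.
assert kd_loss(teacher, teacher) < 1e-9
assert loss > 0
```

In FedDGT's setting this objective would be applied in two directions: first to train a generator against the aggregated teacher, then to regularize each local model using the generator's samples, so no local model is ever revealed to the server.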
Pages: 2704-2718 (15 pages)