KDRSFL: A knowledge distillation resistance transfer framework for defending model inversion attacks in split federated learning

Times Cited: 0
Authors
Chen, Renlong [1 ]
Xia, Hui [1 ]
Wang, Kai [2 ]
Xu, Shuo [1 ]
Zhang, Rui [1 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao 266100, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin, Peoples R China
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2025, Vol. 166
Funding
National Natural Science Foundation of China
Keywords
Split federated learning; Knowledge distillation; Model inversion attacks; Privacy-Preserving Machine Learning; Resistance Transfer; INFORMATION LEAKAGE; PRIVACY; ROBUSTNESS;
DOI
10.1016/j.future.2024.107637
Chinese Library Classification
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
Split Federated Learning (SFL) enables organizations such as healthcare institutions to collaboratively improve model performance without sharing private data. However, SFL is susceptible to model inversion (MI) attacks, which pose a serious risk of private data leakage and loss of accuracy. This paper therefore proposes an innovative framework called Knowledge Distillation Resistance Transfer for Split Federated Learning (KDRSFL). The KDRSFL framework combines one-shot distillation techniques with attacker-aware adjustment strategies to achieve knowledge distillation-based resistance transfer, enhancing the classification accuracy of the client feature extractors while strengthening their resistance to adversarial attacks. First, a teacher model with strong resistance to MI attacks is constructed, and this capability is transferred to the client models through knowledge distillation. Second, the defense of the client models is further strengthened through attacker-aware training. Finally, the client models achieve effective defense against MI attacks through local training. Detailed experimental validation shows that KDRSFL performs well against MI attacks on the CIFAR-100 dataset: with a VGG11 model, it achieves a reconstruction mean squared error (MSE) of 0.058 while maintaining 67.4% model accuracy, a 16% improvement in MI reconstruction error over ResSFL with only 0.1% accuracy loss.
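The resistance-transfer objective described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: the network shapes, the simulated inversion decoder, and the loss weights alpha and beta are assumptions made only to show how a distillation term toward an MI-resistant teacher can be combined with an attacker-aware term that drives the client extractor to degrade the reconstruction quality of a simulated inverter operating on its cut-layer activations.

# Minimal sketch (assumptions, not the authors' code) of knowledge-distillation-based
# resistance transfer with an attacker-aware term for a split-learning client.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClientExtractor(nn.Module):
    """Client-side front portion of the split model (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.body(x)  # "smashed" cut-layer activations sent to the server in SFL

class SimulatedInverter(nn.Module):
    """Stand-in MI attacker that tries to reconstruct inputs from activations."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.body(z)

def kd_resistance_loss(student_feat, teacher_feat, recon, images, alpha=1.0, beta=0.5):
    """Distillation term pulls the student toward the MI-resistant teacher's features;
    the attacker-aware term rewards a large reconstruction error for the simulated
    attacker (its MSE is maximized). alpha and beta are assumed weights."""
    distill = F.mse_loss(student_feat, teacher_feat)
    attack_mse = F.mse_loss(recon, images)
    return alpha * distill - beta * attack_mse

if __name__ == "__main__":
    student, teacher, inverter = ClientExtractor(), ClientExtractor(), SimulatedInverter()
    teacher.eval()  # frozen teacher assumed already trained to resist MI attacks
    opt = torch.optim.SGD(student.parameters(), lr=0.01)

    images = torch.rand(8, 3, 32, 32)  # toy CIFAR-sized batch
    feats = student(images)
    with torch.no_grad():
        t_feats = teacher(images)
    recon = inverter(feats)  # attacker's best-effort reconstruction of the inputs
    loss = kd_resistance_loss(feats, t_feats, recon, images)
    loss.backward()
    opt.step()
    print(f"toy training step loss: {loss.item():.4f}")

In an actual deployment the extractor would run on the client, its activations would feed the server-side model for the task loss, and the simulated inverter would be periodically retrained against the current extractor so that the attacker-aware term remains meaningful; those steps are omitted here for brevity.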
Pages: 10