KDRSFL: A knowledge distillation resistance transfer framework for defending model inversion attacks in split federated learning

Cited by: 0
Authors
Chen, Renlong [1 ]
Xia, Hui [1 ]
Wang, Kai [2 ]
Xu, Shuo [1 ]
Zhang, Rui [1 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, Qingdao 266100, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin, Peoples R China
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2025, Vol. 166
Funding
National Natural Science Foundation of China
Keywords
Split federated learning; Knowledge distillation; Model inversion attacks; Privacy-Preserving Machine Learning; Resistance Transfer; INFORMATION LEAKAGE; PRIVACY; ROBUSTNESS;
DOI
10.1016/j.future.2024.107637
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
Split Federated Learning (SFL) enables organizations such as healthcare providers to collaborate on improving model performance without sharing private data. However, SFL is susceptible to model inversion (MI) attacks, which pose a serious risk of private data leakage and accuracy loss. This paper therefore proposes an innovative framework called Knowledge Distillation Resistance Transfer for Split Federated Learning (KDRSFL). KDRSFL combines one-shot distillation with attacker-aware adjustment strategies to achieve knowledge distillation-based resistance transfer, improving the classification accuracy of feature extractors while strengthening their resistance to MI attacks. First, a teacher model with strong resistance to MI attacks is constructed, and this resistance is transferred to the client models through knowledge distillation. Second, the client models' defenses are further strengthened through attacker-aware training. Finally, the client models achieve effective defense against MI attacks through local training. Detailed experiments show that KDRSFL defends well against MI attacks on the CIFAR-100 dataset: it achieves a reconstruction mean squared error (MSE) of 0.058 while maintaining 67.4% model accuracy with VGG11, a 16% improvement in MI reconstruction error over ResSFL at a cost of only 0.1% accuracy.
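The abstract describes transferring the teacher model's MI resistance to client models via knowledge distillation. As a rough illustration of what such a distillation objective could look like (the paper's actual loss is not given in this record; the function name, temperature T, and mixing weight alpha below are illustrative assumptions), here is a minimal sketch of a standard soft/hard knowledge-distillation loss in PyTorch:

    # Minimal sketch of a Hinton-style distillation loss; the function name,
    # temperature T, and weight alpha are assumptions, not values from the paper.
    import torch
    import torch.nn.functional as F

    def kd_resistance_transfer_loss(student_logits: torch.Tensor,
                                    teacher_logits: torch.Tensor,
                                    labels: torch.Tensor,
                                    T: float = 4.0,
                                    alpha: float = 0.7) -> torch.Tensor:
        """Blend a soft-target term (match the MI-resistant teacher) with a
        hard-target term (preserve classification accuracy)."""
        # KL divergence between temperature-softened teacher and student
        # distributions; the T*T factor keeps gradient magnitudes comparable
        # across temperatures.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        # Standard cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

In a setup like the one the abstract outlines, a larger alpha would weight the transfer of the teacher's resistance more heavily, while the cross-entropy term limits the accuracy loss that the paper reports as only 0.1%.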
Pages: 10