ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning

Cited by: 42
Authors
Li, Jingtao [1 ]
Rakin, Adnan Siraj [1 ]
Chen, Xing [1 ]
He, Zhezhi [2 ]
Fan, Deliang [1 ]
Chakrabarti, Chaitali [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
[2] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China
Source
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
Keywords
DOI
10.1109/CVPR52688.2022.00995
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This work aims to tackle the Model Inversion (MI) attack on Split Federated Learning (SFL). SFL is a recent distributed training scheme in which multiple clients send intermediate activations (i.e., feature maps), instead of raw data, to a central server. While such a scheme helps reduce the computational load at the client end, it exposes the raw data to reconstruction from the intermediate activations by the server. Existing works on protecting SFL only consider the inference phase and do not handle attacks launched during training. We therefore propose ResSFL, a Split Federated Learning framework that is designed to be MI-resistant during training. It is based on deriving a resistant feature extractor via attacker-aware training, and using this extractor to initialize the client-side model prior to standard SFL training. This method reduces both the computational cost of employing a strong inversion model in client-side adversarial training and the vulnerability to attacks launched in early training epochs. On the CIFAR-100 dataset, our proposed framework successfully mitigates the MI attack on a VGG-11 model, yielding a high reconstruction Mean-Square-Error of 0.050 compared to 0.005 for the baseline system. The framework achieves 67.5% accuracy (only a 1% accuracy drop) with very low computation overhead. Code is released at: https://github.com/zlijingtao/ResSFL.
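As a rough illustration of the attacker-aware training step described in the abstract, the sketch below pairs a toy client-side feature extractor with a simulated inversion decoder: the decoder is trained to reconstruct inputs from the intermediate activations, while the extractor and a server-side head are trained to keep classification accuracy high and the simulated attacker's reconstruction MSE large. The module names (FeatureExtractor, InversionDecoder, ServerHead), the toy architectures, the random CIFAR-shaped data, and the weighting factor lambda_inv are illustrative assumptions and are not taken from the paper or the released code.

# Minimal, illustrative PyTorch sketch of attacker-aware training under the
# assumptions stated above; it is not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Toy stand-in for the client-side model (the part kept on the client)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        return self.conv(x)            # 64 x 8 x 8 feature map for a 32 x 32 input

class InversionDecoder(nn.Module):
    """Simulated attacker: tries to reconstruct the input from the feature map."""
    def __init__(self):
        super().__init__()
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, z):
        return self.deconv(z)

class ServerHead(nn.Module):
    """Toy stand-in for the server-side model that finishes the classification."""
    def __init__(self, num_classes=100):
        super().__init__()
        self.fc = nn.Linear(64 * 8 * 8, num_classes)
    def forward(self, z):
        return self.fc(z.flatten(1))

extractor, decoder, head = FeatureExtractor(), InversionDecoder(), ServerHead()
opt_def = torch.optim.SGD(list(extractor.parameters()) + list(head.parameters()), lr=0.05)
opt_att = torch.optim.Adam(decoder.parameters(), lr=1e-3)
lambda_inv = 0.5                       # weight of the anti-inversion term (assumed value)

for step in range(100):                # toy loop on random data
    x = torch.rand(16, 3, 32, 32)      # stand-in for a batch of images in [0, 1]
    y = torch.randint(0, 100, (16,))   # stand-in for CIFAR-100 labels

    # (1) Attacker step: train the simulated inversion model to reconstruct x
    #     from the current (detached) intermediate activations.
    z = extractor(x).detach()
    att_loss = F.mse_loss(decoder(z), x)
    opt_att.zero_grad(); att_loss.backward(); opt_att.step()

    # (2) Defender step: keep classification accuracy high while making the
    #     simulated attacker's reconstruction error large. Only the extractor
    #     and head are updated here; the decoder's weights are left untouched.
    z = extractor(x)
    task_loss = F.cross_entropy(head(z), y)
    inv_mse = F.mse_loss(decoder(z), x)
    opt_def.zero_grad()
    (task_loss - lambda_inv * inv_mse).backward()
    opt_def.step()

In the full ResSFL pipeline, the resistant extractor obtained from such a pre-training stage would then be used to initialize the client-side model before standard SFL training, which is the resistance-transfer step the abstract refers to.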
Pages: 10184-10192
Page count: 9