Survey on model inversion attack and defense in federated learning

Cited: 0
Authors
Wang D. [1 ]
Qin Q. [1 ]
Guo K. [1 ]
Liu R. [1 ]
Yan W. [1 ]
Ren Y. [1 ]
Luo Q. [2 ]
Shen Y. [3 ]
Affiliations
[1] School of Cyberspace Security, Hangzhou Dianzi University, Hangzhou
[2] Shandong Inspur Science Research Institute Co., Ltd, Jinan
[3] Shandong Blockchain Research Institute, Jinan
Source
Tongxin Xuebao/Journal on Communications | 2023 / Vol. 44 / No. 11
Keywords
federated learning; model inversion attack; privacy security;
DOI
10.11959/j.issn.1000-436x.2023209
Abstract
As a distributed machine learning technology, federated learning can solve the problem of data islands. However, because machine learning models unconsciously memorize their training data, the model parameters uploaded by participants and the global model are exposed to various privacy attacks. This survey systematically reviews model inversion attacks, one class of such privacy attacks. First, the theoretical framework of model inversion attacks was summarized and analyzed in detail. Second, existing attack methods were summarized, analyzed, and compared from the perspective of threat models. Third, defense strategies of different technical types were summarized and compared. Finally, the evaluation criteria and datasets commonly used for model inversion attacks were reviewed, and the main challenges and future research directions were outlined. © 2023 Editorial Board of Journal on Communications. All rights reserved.
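The abstract notes that parameters uploaded by participants can leak training data. A minimal illustration of why (not taken from the surveyed paper, and far simpler than the attacks it covers): for a single linear layer trained with softmax cross-entropy, the gradient a federated-learning client would upload determines its private input exactly, since dL/dW[i] = (p[i] - y[i]) * x and dL/db[i] = (p[i] - y[i]).

```python
import numpy as np

# Toy sketch of gradient leakage in federated learning, assuming a
# single linear layer with softmax + cross-entropy loss. All names
# and dimensions here are illustrative, not from the surveyed paper.

rng = np.random.default_rng(0)
d, k = 8, 3                       # input dimension, number of classes
W = rng.normal(size=(k, d))       # current global model weights
b = rng.normal(size=k)

x = rng.normal(size=d)            # the client's private training example
y = np.eye(k)[1]                  # its one-hot label (class 1)

logits = W @ x + b
p = np.exp(logits - logits.max())
p /= p.sum()                      # softmax probabilities

grad_W = np.outer(p - y, x)       # gradients the client uploads
grad_b = p - y

# Server-side "inversion": recover x from the shared gradients alone.
i = np.argmax(np.abs(grad_b))     # pick a class with a large residual
x_rec = grad_W[i] / grad_b[i]

print(np.allclose(x_rec, x))      # → True: the private input is recovered
```

Real attacks surveyed in the paper must cope with deep nonlinear models and aggregated updates, typically by optimizing a candidate input to match the observed gradients, but the single-layer case shows the underlying leakage channel.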
Pages: 94-109
Page count: 15