MixNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers

Cited by: 3
Authors
Lebrun, Thomas [1]
Boutet, Antoine [1]
Aalmoes, Jan [1]
Baud, Adrien [1]
Affiliations
[1] Univ Lyon, INSA Lyon, Inria, CITI, Lyon, France
Source
PROCEEDINGS OF THE TWENTY-THIRD ACM/IFIP INTERNATIONAL MIDDLEWARE CONFERENCE, MIDDLEWARE 2022 | 2022
Keywords
Privacy; Machine Learning; Federated Learning; Inference Attacks
DOI
10.1145/3528535.3565240
CLC Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Machine Learning (ML) has emerged as a core technology for building learning models that perform complex tasks. Boosted by Machine Learning as a Service (MLaaS), the number of applications relying on ML capabilities is ever increasing. However, ML models are the source of various privacy violations through passive or active attacks mounted by different entities. In this paper, we present MixNN, a proxy-based privacy-preserving system for federated learning that protects the privacy of participants against a curious or malicious aggregation server trying to infer sensitive information (i.e., membership and attribute inference). MixNN receives the model updates from participants and mixes layers between participants before sending the mixed updates to the aggregation server. This mixing strategy drastically reduces privacy leakage without any trade-off in utility: mixing the model updates has no impact on the result of the aggregation computed by the server. We report on an extensive evaluation of MixNN using several datasets and neural network architectures to quantify privacy leakage through membership and attribute inference attacks, as well as the robustness of the protection. We show that MixNN significantly limits both membership and attribute inference compared to a baseline using model compression and noisy gradients (well known to damage utility), while keeping the same level of utility as classic federated learning.
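For intuition, here is a minimal Python sketch (not the authors' implementation) of the mixing invariant stated in the abstract: FedAvg averages each layer over all participants, so independently permuting, for each layer index, which participant contributes which layer leaves every per-layer average unchanged. The helper names (fedavg, mix_layers) and the toy model shapes are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical updates: 5 participants, each holding a 3-layer model update.
updates = [[rng.normal(size=(4, 4)) for _ in range(3)] for _ in range(5)]

def fedavg(updates):
    # Plain FedAvg: average each layer over all participants.
    n_layers = len(updates[0])
    return [np.mean([u[l] for u in updates], axis=0) for l in range(n_layers)]

def mix_layers(updates, rng):
    # Proxy-side mixing: for every layer index, permute which participant's
    # layer goes into which outgoing (mixed) update, so the server never
    # sees any single participant's complete update.
    n_layers = len(updates[0])
    mixed = [[None] * n_layers for _ in updates]
    for l in range(n_layers):
        perm = rng.permutation(len(updates))
        for dst, src in enumerate(perm):
            mixed[dst][l] = updates[src][l]
    return mixed

plain = fedavg(updates)
mixed = fedavg(mix_layers(updates, rng))
# Per-layer averages are identical: mixing costs no utility.
assert all(np.allclose(a, b) for a, b in zip(plain, mixed))

Each mixed update still contains exactly one copy of every layer from the participant pool, which is why the per-layer sums, and hence the averages, are untouched; only the linkage between a participant's identity and its full set of layers is broken.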
Pages: 135-147
Page count: 13