Fool Attackers by Imperceptible Noise: A Privacy-Preserving Adversarial Representation Mechanism for Collaborative Learning

Cited by: 1
Authors
Ruan, Na [1 ]
Chen, Jikun [1 ]
Huang, Tu [1 ]
Sun, Zekun [1 ]
Li, Jie [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci, Shanghai 200240, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Federated learning; Data models; Training; Task analysis; Noise; Privacy; Data privacy; collaborative learning; adversarial examples; quantification;
DOI
10.1109/TMC.2024.3405548
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The performance of deep learning models depends heavily on the amount of training data. It is common practice for today's data holders to merge their datasets and train models collaboratively, which nevertheless poses a threat to data privacy. Unlike existing methods such as secure multi-party computation (MPC) and federated learning (FL), we find that representation learning has unique advantages in collaborative learning due to its low privacy budget, wide applicability across tasks, and low communication overhead. However, data representations face the threat of model inversion attacks. In this article, we formally define the collaborative learning scenario and present ARS (Adversarial Representation Sharing), a collaborative learning framework in which users share representations of data to train models and add imperceptible adversarial noise to those representations to defend against reconstruction and attribute extraction attacks. Through theoretical analysis and evaluation of ARS in different contexts, we demonstrate that our mechanism is effective against model inversion attacks and achieves high utility and low communication complexity while preserving data privacy. Moreover, the ARS framework has wide applicability: it can be easily extended to the vertical data partitioning scenario and applied to different tasks.
Pages: 11839-11852
Page count: 14