Fool Attackers by Imperceptible Noise: A Privacy-Preserving Adversarial Representation Mechanism for Collaborative Learning

Citations: 0
Authors
Ruan, Na [1 ]
Chen, Jikun [1 ]
Huang, Tu [1 ]
Sun, Zekun [1 ]
Li, Jie [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci, Shanghai 200240, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Federated learning; Data models; Training; Task analysis; Noise; Privacy; Data privacy; collaborative learning; adversarial examples; quantification;
DOI
10.1109/TMC.2024.3405548
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
The performance of deep learning models depends heavily on the amount of training data. It is common practice for today's data holders to merge their datasets and train models collaboratively, which, however, poses a threat to data privacy. Unlike existing methods such as secure multi-party computation (MPC) and federated learning (FL), we find that representation learning has unique advantages in collaborative learning due to its low privacy budget, wide applicability across tasks, and lower communication overhead. However, data representations face the threat of model inversion attacks. In this article, we formally define the collaborative learning scenario and present ARS (adversarial representation sharing), a collaborative learning framework in which users share representations of data to train models and add imperceptible adversarial noise to those representations to defend against reconstruction or attribute extraction attacks. Through theoretical analysis and evaluation of ARS in different contexts, we demonstrate that our mechanism is effective against model inversion attacks and achieves high utility and low communication complexity while preserving data privacy. Moreover, the ARS framework has wide applicability: it extends readily to the vertical data partitioning scenario and to different tasks.
Pages: 11839-11852
Page count: 14