FLPM: A property modification scheme for data protection in federated learning

Cited by: 4
Authors
Xu, Shuo [1 ]
Xia, Hui [1 ]
Liu, Peishun [1 ]
Zhang, Rui [1 ]
Chi, Hao [1 ]
Gao, Wei [2 ]
Affiliations
[1] Ocean Univ China, Coll Comp Sci & Technol, 1299 Sansha Rd, Qingdao 266000, Shandong, Peoples R China
[2] Ocean Univ China, Coll Ocean Technol, 1299 Sansha Rd, Qingdao 266000, Shandong, Peoples R China
Source
FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE | 2024, Vol. 154
Keywords
Federated learning; Variational autoencoder; Privacy protection; Property inference attack; Data poisoning attack;
DOI
10.1016/j.future.2023.12.030
Chinese Library Classification
TP301 (Theory and Methods)
Discipline Code
081202
Abstract
Federated learning (FL) is a critical technology for implementing time-critical computing systems in the Internet of Things (IoT), as it allows continuous updates to machine learning (ML) models across IoT devices. However, the vulnerability of ML models and the complexity of the IoT pose significant threats to device data security and privacy, undermining the robustness of time-critical computing systems built on FL. Recent research on FL data protection has made progress, but balancing privacy protection with model availability remains challenging. For example, cryptography-based defense schemes increase the time overhead of time-critical computing systems, while differential privacy degrades system performance. This paper proposes the FL property modification scheme (FLPM), a data-preprocessing approach that resists property inference attacks and data poisoning attacks. FLPM modifies the properties of training data using algorithms for property separation, selection, and control based on continuous latent variables. While this sacrifices a small amount of classification accuracy, it significantly improves data protection. Detailed experimental results demonstrate that FLPM successfully separates and controls image property vectors. In the FL classification task, the property-modified data achieve a precision of 94.44%. The scheme effectively prevents property inference attacks and data poisoning attacks: FLPM reduces the AUC score of property inference attacks from 0.94 to 0.56 and reduces the success rates of data poisoning attacks to 5.13%, 7.07%, and 4.60%.
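The property separation and control idea the abstract describes (splitting a continuous latent representation into content and property parts, then replacing the property part before decoding) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a trained variational autoencoder whose latent vector factorizes into content and property dimensions, and the dimension split, function names, and "neutral" target property are all hypothetical.

```python
import numpy as np

# Hypothetical FLPM-style property control in latent space.
# Assumes a trained VAE produced latent z = [content | property],
# with the last PROPERTY_DIMS dimensions encoding a sensitive
# property; the split and the neutral target are illustrative.

LATENT_DIM = 8
PROPERTY_DIMS = 2  # assumed number of property dimensions


def separate(z: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a latent vector into content and property sub-vectors."""
    return z[:-PROPERTY_DIMS], z[-PROPERTY_DIMS:]


def modify_property(z: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Replace the property sub-vector while keeping content intact."""
    content, _ = separate(z)
    return np.concatenate([content, target])


# Stand-in latent code for one training image (would come from the encoder).
z = np.arange(LATENT_DIM, dtype=float)
neutral = np.zeros(PROPERTY_DIMS)  # "neutralized" property target
z_mod = modify_property(z, neutral)

print(z_mod[:-PROPERTY_DIMS].tolist())  # content dimensions preserved
print(z_mod[-PROPERTY_DIMS:].tolist())  # property dimensions replaced
```

In a full pipeline, `z_mod` would be passed back through the VAE decoder to produce the property-modified training image before it enters the FL task, so that an attacker inferring properties from model updates only sees the neutralized values.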
Pages: 151 - 159 (9 pages)