VPPFL: Verifiable Privacy-Preserving Federated Learning in Cloud Environment

Cited: 0
Authors
Wang, Huiyong [1 ,2 ,3 ]
Yang, Tengfei [1 ,4 ]
Ding, Yong [3 ,5 ]
Tang, Shijie [6 ]
Wang, Yujue [7 ]
Affiliations
[1] Guilin Univ Elect Technol, Sch Math & Comp Sci, Guilin 541004, Peoples R China
[2] Guilin Univ Elect Technol, Ctr Appl Math Guangxi, Guilin 541004, Peoples R China
[3] Guilin Univ Elect Technol, Guangxi Key Lab Cryptog & Informat Secur, Guilin 541004, Peoples R China
[4] Guilin Univ Elect Technol, Guangxi Engn Res Ctr Ind Internet Secur & Blockchain, Guilin 541004, Peoples R China
[5] HKCT Inst Higher Educ, Inst Cyberspace Technol, Hong Kong, Peoples R China
[6] Guilin Univ Elect Technol, Sch Elect Engn & Automat, Guilin 541004, Peoples R China
[7] Beihang Univ, Hangzhou Innovat Inst, Hangzhou 310052, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Servers; Training; Homomorphic encryption; Protection; Differential privacy; Costs; Biological neural networks; Systems architecture; Symbols; Privacy; Privacy protection; Verifiable; Threshold multi-key homomorphic encryption
DOI
10.1109/ACCESS.2024.3472467
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
As a distributed machine learning paradigm, federated learning has attracted wide attention from academia and industry by enabling multiple users to jointly train models without sharing their local data. However, federated learning still faces various security and privacy issues. First, even if users upload only gradients, their private information may still be leaked. Second, if the aggregation server deliberately returns fabricated results, the model's performance may degrade. To address these issues, we propose VPPFL, a verifiable privacy-preserving federated learning scheme that is secure against a semi-malicious cloud server. We use threshold multi-key homomorphic encryption to protect local gradients, and we construct a one-way function that enables users to independently verify the aggregation results. Furthermore, our scheme tolerates a small fraction of users dropping out during training. Finally, simulation experiments on the MNIST dataset demonstrate that VPPFL completes training correctly and efficiently while preserving privacy.
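To make the aggregate-then-verify flow described in the abstract concrete, the following minimal Python sketch mimics its shape under simplifying assumptions: pairwise additive masking stands in for the paper's threshold multi-key homomorphic encryption, and a modular-exponentiation tag plays the role of the one-way verification function. Every parameter and helper name here (P, G, tag, pairwise_masks, run_round) is an illustrative assumption, not VPPFL's actual construction.

import random

# Toy public parameters for the one-way tag f(x) = G^x mod P.
# Assumed for illustration; a real scheme uses a properly chosen large group.
P = 2**127 - 1   # Mersenne prime, used here only as a small demo modulus
G = 3            # fixed base for the tag function

def tag(x: int) -> int:
    """One-way, additively homomorphic tag: tag(x) = G^x mod P."""
    return pow(G, x, P)

def pairwise_masks(n: int, rng: random.Random) -> list[int]:
    """Masks m_0..m_{n-1} with sum(m_i) == 0 mod P, so they cancel in the sum."""
    masks = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(P)          # shared secret between users i and j
            masks[i] = (masks[i] + m) % P
            masks[j] = (masks[j] - m) % P
    return masks

def run_round(gradients: list[int]) -> int:
    """One training round: mask, aggregate, then verify the server's sum."""
    rng = random.Random(42)
    masks = pairwise_masks(len(gradients), rng)

    # Each user uploads a masked gradient plus a public verification tag.
    uploads = [(g + m) % P for g, m in zip(gradients, masks)]
    tags = [tag(g) for g in gradients]

    # The server adds the uploads; the masks cancel, revealing only the sum.
    aggregate = sum(uploads) % P

    # Each user independently recomputes the expected tag product and checks
    # that G^aggregate matches it; a fabricated result fails this test.
    expected = 1
    for t in tags:
        expected = (expected * t) % P
    assert pow(G, aggregate, P) == expected, "aggregation result rejected"
    assert pow(G, aggregate + 1, P) != expected  # forged sums are caught
    return aggregate

print(run_round([5, 7, 11]))  # quantized, non-negative toy gradients -> 23

The check works because the tag is additively homomorphic, tag(x) * tag(y) mod P == tag(x + y), so the product of the individual tags equals the tag of the true sum; this mirrors the verification idea without revealing any single user's gradient, though the paper's actual primitives and dropout handling differ.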
Pages: 151998-152008
Number of pages: 11