A Sensitivity-aware and Block-wise Pruning Method for Privacy-preserving Federated Learning

Cited by: 0
Authors
Niu, Ben [1 ]
Wang, Xindi [1 ,2 ]
Zhang, Likun [1 ,2 ]
Guo, Shoukun [1 ]
Cao, Jin [3 ]
Li, Fenghua [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
[3] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian, Peoples R China
Source
IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM | 2023
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China
DOI
10.1109/GLOBECOM54140.2023.10437766
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Federated learning (FL) is a distributed learning framework that reduces privacy risks by not sharing private data directly. However, recent work has shown that an adversary can launch data reconstruction attacks using the gradients or model updates shared by clients. Existing defenses either fail to provide a sufficient privacy guarantee or incur a significant drop in model accuracy. To achieve a good privacy-utility tradeoff, we propose a novel block-wise pruning method that mitigates privacy leakage by locating and quantifying a model's privacy risk at a finer-grained level. Specifically, we define a sensitivity metric that computes the gradient sensitivity w.r.t. the input to quantify the privacy leakage risk of each block. We then divide the entire model into equal-sized blocks, sort them by their sensitivity values, and select the blocks with the lowest sensitivity as the pruned model to be communicated during the client-server interaction. To evaluate the effectiveness and efficiency of our defense, we conduct experiments on MNIST and CIFAR10 against the DLG attack and the GS attack. The results demonstrate that, compared with baseline defenses, our method significantly mitigates gradient leakage against both attacks, yielding up to 20x larger mean squared error between the reconstructed data and the raw data, with only a modest accuracy drop. Meanwhile, the communication cost between the server and clients is also reduced.
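The block-wise selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper defines sensitivity via the gradient w.r.t. the input, whereas here the mean absolute value of each gradient block stands in as a simple proxy score, and the function names (`block_sensitivities`, `select_least_sensitive`) and the `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def block_sensitivities(flat_grad, block_size):
    """Split a flattened gradient vector into equal-sized blocks and
    score each block. Mean |grad| is used here as a stand-in proxy for
    the paper's input-gradient sensitivity metric (an assumption)."""
    pad = (-flat_grad.size) % block_size          # zero-pad so blocks divide evenly
    g = np.concatenate([flat_grad, np.zeros(pad)])
    blocks = g.reshape(-1, block_size)
    return np.abs(blocks).mean(axis=1)            # one score per block

def select_least_sensitive(scores, keep_ratio=0.5):
    """Return indices of the keep_ratio fraction of blocks with the
    lowest sensitivity; only these blocks would be communicated."""
    k = max(1, int(keep_ratio * scores.size))
    return np.argsort(scores)[:k]

# Toy usage: a 6-element "gradient" cut into blocks of 2.
grad = np.array([0.1, -0.2, 0.3, 0.0, 0.5, -0.1])
scores = block_sensitivities(grad, block_size=2)   # 3 blocks
keep = select_least_sensitive(scores, keep_ratio=0.5)
```

Keeping only the lowest-sensitivity blocks serves both goals at once: the most privacy-revealing blocks never leave the client, and the pruned update is smaller to transmit.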
Pages: 4259-4264 (6 pages)
Related Papers (50 total)
  • [21] A Syntactic Approach for Privacy-Preserving Federated Learning
    Choudhury, Olivia
    Gkoulalas-Divanis, Aris
    Salonidis, Theodoros
    Sylla, Issa
    Park, Yoonyoung
    Hsu, Grace
    Das, Amar
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1762 - 1769
  • [22] PPFLV: privacy-preserving federated learning with verifiability
    Zhou, Qun
    Shen, Wenting
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2024, 27 (09): : 12727 - 12743
  • [23] Contribution Measurement in Privacy-Preserving Federated Learning
    Hsu, Ruei-hau
    Yu, Yi-an
    Su, Hsuan-cheng
    JOURNAL OF INFORMATION SCIENCE AND ENGINEERING, 2024, 40 (06) : 1173 - 1196
  • [24] Privacy-Preserving Federated Learning in Fog Computing
    Zhou, Chunyi
    Fu, Anmin
    Yu, Shui
    Yang, Wei
    Wang, Huaqun
    Zhang, Yuqing
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (11): : 10782 - 10793
  • [25] Federated Learning for Privacy-Preserving Speaker Recognition
    Woubie, Abraham
    Backstrom, Tom
    IEEE ACCESS, 2021, 9 : 149477 - 149485
  • [26] Privacy-Preserving Decentralized Aggregation for Federated Learning
    Jeon, Beomyeol
    Ferdous, S. M.
    Rahmant, Muntasir Raihan
    Walid, Anwar
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM WKSHPS 2021), 2021,
  • [27] Privacy-Preserving Federated Learning via Disentanglement
    Zhou, Wenjie
    Li, Piji
    Han, Zhaoyang
    Lu, Xiaozhen
    Li, Juan
    Ren, Zhaochun
    Liu, Zhe
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3606 - 3615
  • [28] GAIN: Decentralized Privacy-Preserving Federated Learning
    Jiang, Changsong
    Xu, Chunxiang
    Cao, Chenchen
    Chen, Kefei
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2023, 78
  • [29] Privacy-preserving Decentralized Federated Deep Learning
    Zhu, Xudong
    Li, Hui
    PROCEEDINGS OF ACM TURING AWARD CELEBRATION CONFERENCE, ACM TURC 2021, 2021, : 33 - 38
  • [30] Privacy-Preserving and Reliable Distributed Federated Learning
    Dong, Yipeng
    Zhang, Lei
    Xu, Lin
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT I, 2024, 14487 : 130 - 149