CSRA: Robust Incentive Mechanism Design for Differentially Private Federated Learning

Cited by: 2
Authors
Yang, Yunchao [1 ,2 ]
Hu, Miao [1 ,2 ]
Zhou, Yipeng [3 ]
Liu, Xuezheng [1 ,2 ]
Wu, Di [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
[2] Guangdong Key Lab Big Data Anal & Proc, Guangzhou 510006, Peoples R China
[3] Macquarie Univ, Fac Sci & Engn, Dept Comp, Sydney, NSW 2112, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; incentive mechanism; dishonest behavior; differential privacy;
DOI
10.1109/TIFS.2023.3329441
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
The differentially private federated learning (DPFL) paradigm emerges to preserve data privacy from two perspectives. First, decentralized clients exchange only model updates, rather than raw data, with a parameter server (PS) over multiple communication rounds of model training. Second, model updates exposed to the PS are distorted by clients with differentially private (DP) noise. To incentivize clients to participate in DPFL, existing works have proposed various incentive mechanisms that reward participating clients based on their data quality and DP noise scales, assuming that all clients are honest and genuinely report their DP noise scales. However, the PS cannot directly measure or observe DP noise scales, leaving a vulnerability: clients can boost their rewards and lower DPFL utility by dishonestly reporting their DP noise scales. Through a quantitative study, we validate the adverse influence of dishonest clients in DPFL. To overcome this deficiency, we propose a robust incentive mechanism called client selection with reverse auction (CSRA) for DPFL. We prove that CSRA satisfies the properties of truthfulness, individual rationality, budget feasibility, and computational efficiency. Moreover, CSRA can detect dishonest clients in two steps in each communication round. First, CSRA compares the variance of each client's exposed model update against its claimed DP noise scale to identify suspicious clients. Second, suspicious clients are further clustered based on their model updates to finally identify dishonest clients. Once dishonest clients are identified, CSRA not only removes them from the current round but also lowers their probability of being selected in subsequent rounds. Extensive experimental results demonstrate that CSRA provides robust incentives against dishonest clients in DPFL and significantly outperforms other baselines on three real public datasets.
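The first detection step described in the abstract — checking whether the empirical variance of a client's exposed model update is consistent with its claimed DP noise scale — can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the tolerance `tol`, the default `base_update_var`, and the simulated clients are all assumptions introduced here.

```python
import numpy as np

def flag_suspicious(update, claimed_sigma, base_update_var=0.0, tol=0.5):
    """Flag a client whose exposed update's empirical variance is far below
    what its claimed Gaussian DP noise scale implies.

    Sketch of CSRA's first screening step; `tol` and `base_update_var`
    are illustrative assumptions, not values from the paper.
    """
    emp_var = np.var(update)
    # Under additive Gaussian DP noise, the exposed update's variance should
    # be at least roughly claimed_sigma**2 on top of the clean update's own.
    expected_var = base_update_var + claimed_sigma ** 2
    return emp_var < tol * expected_var

rng = np.random.default_rng(0)
true_update = rng.normal(0.0, 0.1, size=1000)  # simulated clean model update

# Honest client: adds noise matching its claimed sigma = 1.0.
honest = true_update + rng.normal(0.0, 1.0, size=1000)
# Dishonest client: claims sigma = 1.0 but adds almost no noise.
dishonest = true_update + rng.normal(0.0, 0.05, size=1000)

print(flag_suspicious(honest, claimed_sigma=1.0))     # False (consistent)
print(flag_suspicious(dishonest, claimed_sigma=1.0))  # True (suspicious)
```

A client flagged this way would then, per the abstract, be passed to the second step, where suspicious clients' updates are clustered to separate genuinely dishonest clients from false positives.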
Pages: 892-906
Page count: 15
Related Papers (50 total)
  • [21] A Hierarchical Incentive Mechanism for Federated Learning
    Huang, Jiwei
    Ma, Bowen
    Wu, Yuan
    Chen, Ying
    Shen, Xuemin
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23(12): 12731-12747
  • [22] Incentive Design and Differential Privacy Based Federated Learning: A Mechanism Design Perspective
    Kim, Sungwook
    IEEE ACCESS, 2020, 8: 187317-187325
  • [23] Incentive Mechanism Design for Unbiased Federated Learning with Randomized Client Participation
    Luo, Bing
    Feng, Yutong
    Wang, Shiqiang
    Huang, Jianwei
    Tassiulas, Leandros
    2023 IEEE 43RD INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS, 2023: 545-555
  • [24] Compression Boosts Differentially Private Federated Learning
    Kerkouche, Raouf
    Acs, Gergely
    Castelluccia, Claude
    Geneves, Pierre
    2021 IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P 2021), 2021: 304-318
  • [25] Differentially Private Federated Learning on Heterogeneous Data
    Noble, Maxence
    Bellet, Aurelien
    Dieuleveut, Aymeric
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [26] Differentially private federated learning with Laplacian smoothing
    Liang, Zhicong
    Wang, Bao
    Gu, Quanquan
    Osher, Stanley
    Yao, Yuan
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2024, 72
  • [27] Differentially Private Federated Learning with Drift Control
    Chang, Wei-Ting
    Seif, Mohamed
    Tandon, Ravi
    2022 56TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2022: 240-245
  • [28] Differentially Private Federated Temporal Difference Learning
    Zeng, Yiming
    Lin, Yixuan
    Yang, Yuanyuan
    Liu, Ji
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33(11): 2714-2726
  • [29] Towards the Robustness of Differentially Private Federated Learning
    Qi, Tao
    Wang, Huili
    Huang, Yongfeng
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38, NO 18, 2024: 19911-19919
  • [30] Hierarchical Incentive Mechanism Design for Federated Machine Learning in Mobile Networks
    Lim, Wei Yang Bryan
    Xiong, Zehui
    Miao, Chunyan
    Niyato, Dusit
    Yang, Qiang
    Leung, Cyril
    Poor, H. Vincent
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7(10): 9575-9588