Collaboration in Federated Learning With Differential Privacy: A Stackelberg Game Analysis

Cited by: 9
Authors
Huang, Guangjing [1 ]
Wu, Qiong [1 ]
Sun, Peng [2 ]
Ma, Qian [3 ]
Chen, Xu [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[2] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha 410082, Peoples R China
[3] Sun Yat Sen Univ, Sch Intelligent Syst Engn, Shenzhen 518107, Peoples R China
Keywords
Federated learning; differential privacy; Stackelberg game; discrimination rule; INCENTIVE MECHANISM; NETWORKS; DESIGN
DOI
10.1109/TPDS.2024.3354713
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
As a privacy-preserving distributed learning paradigm, federated learning (FL) enables multiple client devices to train a shared model without uploading their local data. To further strengthen privacy protection, differential privacy (DP) has been successfully incorporated into FL systems to defend against privacy attacks from adversaries. In FL with DP, stimulating efficient client collaboration is vital for the FL server, owing to the privacy-preserving nature of DP and the heterogeneous costs (e.g., computation cost) borne by the participating clients. However, this kind of collaboration remains largely unexplored in existing works. To fill this gap, we propose a novel analytical framework based on a Stackelberg game to model the collaboration behaviors of the clients and the server, with reward allocation as the incentive in FL with DP. We first conduct a rigorous convergence analysis of FL with DP and reveal how the clients' multidimensional attributes affect the convergence performance of the FL model. Accordingly, we solve the Stackelberg game and derive the collaboration strategies for both the clients and the server. We further devise an approximately optimal algorithm for the server to efficiently conduct the joint optimization of the client set selection, the number of global iterations, and the reward payment to the clients. Numerical evaluations on real-world datasets validate our theoretical analysis and corroborate the superior performance of the proposed solution.
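The abstract builds on the standard DP-FL training loop: each client clips its local model update and adds Gaussian noise before uploading, and the server averages the noisy updates. The sketch below illustrates only this generic mechanism, not the paper's Stackelberg incentive scheme or its convergence analysis; all function names, the clipping bound, and the noise scale are illustrative assumptions.

```python
import random

def clip(update, bound):
    # L2-clip an update vector to norm at most `bound`.
    norm = sum(u * u for u in update) ** 0.5
    scale = min(1.0, bound / norm) if norm > 0 else 1.0
    return [u * scale for u in update]

def dp_client_update(update, clip_bound, noise_std, rng):
    # Gaussian mechanism: clip, then add i.i.d. Gaussian noise per coordinate.
    clipped = clip(update, clip_bound)
    return [u + rng.gauss(0.0, noise_std) for u in clipped]

def server_aggregate(client_updates):
    # FedAvg-style aggregation: coordinate-wise mean of the noisy updates.
    n = len(client_updates)
    dim = len(client_updates[0])
    return [sum(cu[i] for cu in client_updates) / n for i in range(dim)]

rng = random.Random(0)
raw_updates = [[0.5, -1.2, 2.0], [1.5, 0.3, -0.7], [-0.4, 0.9, 0.1]]
noisy = [dp_client_update(u, clip_bound=1.0, noise_std=0.1, rng=rng)
         for u in raw_updates]
global_update = server_aggregate(noisy)
```

In the paper's setting, each client's chosen noise level and participation decision would enter the Stackelberg game as follower strategies, while the server (leader) picks the client set, iteration count, and rewards.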
Pages: 455-469
Number of pages: 15