FAIR-BFL: Flexible and Incentive Redesign for Blockchain-based Federated Learning

Cited by: 5
Authors
Xu, Rongxin [1 ]
Pokhrel, Shiva Raj [2 ]
Lan, Qiujun [1 ]
Li, Gang [3 ]
Affiliations
[1] Hunan Univ, Hunan Key Lab Data Sci & Blockchain, Business Sch, Changsha 410082, Hunan, Peoples R China
[2] Deakin Univ, Sch IT, Geelong, Vic 3216, Australia
[3] Deakin Univ, Ctr Cyber Secur Res & Innovat, Geelong, Vic 3216, Australia
Source
51ST INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2022 | 2022
Keywords
Federated Learning; Blockchain; Incentive; Security and Privacy;
DOI
10.1145/3545008.3545040
CLC Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Vanilla federated learning (FL) relies on a centralized global aggregation mechanism and assumes that all clients are honest. This makes it difficult for FL to mitigate the single point of failure and to cope with dishonest clients. These impending challenges in the design philosophy of FL call for blockchain-based federated learning (BFL), which couples FL with the benefits of blockchain (e.g., democracy, incentives, and immutability). However, one problem with vanilla BFL is that its capabilities do not adapt to adopters' needs dynamically. Moreover, vanilla BFL relies on unverifiable, self-reported client contributions such as data size, because inspecting clients' raw data is prohibited in FL for privacy reasons. We design and evaluate FAIR-BFL, a novel BFL framework that resolves these identified challenges with greater flexibility and a redesigned incentive mechanism. In contrast to existing works, FAIR-BFL offers unprecedented flexibility via its modular design, allowing adopters to adjust its capabilities to business demands dynamically. Our design also enables BFL to quantify each client's contribution to the global learning process. Such quantification provides a rational metric for distributing rewards among federated clients and helps discover malicious participants that may poison the global model.
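The contribution-based incentive described in the abstract can be illustrated with a minimal sketch: rewards are split in proportion to each client's quantified contribution score, and clients whose scores are extreme low outliers are flagged for inspection. The function name allocate_rewards, the proportional-split rule, and the z-score threshold below are illustrative assumptions, not the scheme specified by FAIR-BFL.

```python
from statistics import mean, pstdev


def allocate_rewards(contributions, reward_pool, z_threshold=-1.5):
    """Split a reward pool in proportion to quantified client contributions.

    `contributions` maps client id -> non-negative contribution score.
    Clients whose score is an extreme low outlier (z-score below
    `z_threshold`) are flagged as potentially malicious. The proportional
    split and the z-score rule are illustrative assumptions only.
    """
    total = sum(contributions.values())
    if total == 0:
        # No measurable contribution: nothing to distribute, nobody flagged.
        return {c: 0.0 for c in contributions}, []

    # Reward each client in proportion to its share of the total contribution.
    rewards = {c: reward_pool * v / total for c, v in contributions.items()}

    # Flag clients whose contribution is far below the group average.
    mu = mean(contributions.values())
    sigma = pstdev(contributions.values())
    flagged = [
        c for c, v in contributions.items()
        if sigma > 0 and (v - mu) / sigma < z_threshold
    ]
    return rewards, flagged


# Toy example: three clients with similar scores and one near-zero score.
rewards, suspects = allocate_rewards(
    {"client_a": 0.42, "client_b": 0.38, "client_c": 0.35, "client_d": 0.01},
    reward_pool=100.0,
)
print(rewards, suspects)
```

In this toy run, client_d receives under one percent of the pool and is flagged, mirroring how a single contribution metric can serve both reward distribution and anomaly detection.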
Pages: 11