A Fast Blockchain-Based Federated Learning Framework With Compressed Communications

Cited by: 22
Authors
Cui, Laizhong [1 ]
Su, Xiaoxin [1 ]
Zhou, Yipeng [2 ]
Affiliations
[1] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[2] Macquarie Univ, Sch Comp, FSE, Macquarie Pk, NSW 2113, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; blockchain; compression; convergence; optimization; design
DOI
10.1109/JSAC.2022.3213345
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic & Communication Technology];
Discipline Classification Codes
0808; 0809
Abstract
Recently, blockchain-based federated learning (BFL) has attracted intensive research attention because its training process is auditable and its architecture is serverless, avoiding the single point of failure of the parameter server in vanilla federated learning (VFL). Nevertheless, BFL tremendously escalates the communication traffic volume, because all local model updates (i.e., changes of model parameters) obtained by BFL clients are transmitted to all miners for verification and to all clients for aggregation. In contrast, the parameter server and clients in VFL only exchange aggregated model updates. Consequently, the huge communication traffic in BFL will inevitably impair training efficiency and hinder its deployment in practice. To improve the practicality of BFL, we are among the first to propose a fast, communication-efficient blockchain-based federated learning framework, called BCFL, which compresses the communications in BFL. We also derive the convergence rate of BCFL with non-convex loss. To maximize the final model accuracy, we further formulate the problem of minimizing the training loss implied by the convergence rate, subject to a limited training time, with respect to the compression rate and the block generation rate; this is a bi-convex optimization problem that can be solved efficiently. Finally, to demonstrate the efficiency of BCFL, we conduct extensive experiments on the standard CIFAR-10 and FEMNIST datasets. The experimental results not only verify the correctness of our analysis, but also show that, compared with BFL, BCFL reduces communication traffic by 95-98% or shortens training time by 90-95%.
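The record does not spell out BCFL's concrete compressor, so the following is a minimal sketch assuming top-k sparsification, a standard update-compression technique in communication-efficient FL; the names (compress_topk, decompress) and the 2% rate are illustrative only. It shows how a tunable compression rate lets each client broadcast a sparse update to miners and peers instead of the dense one, which is the source of the 95-98% traffic reduction cited above.

```python
# Illustrative sketch only: top-k sparsification of a local model update,
# standing in for the (unspecified) compressor in BCFL. The compression
# rate k/d plays the role of the rate the paper optimizes jointly with
# the block generation rate.
import numpy as np

def compress_topk(update: np.ndarray, rate: float):
    """Keep only the largest-magnitude fraction `rate` of coordinates.

    Returns (indices, values): the sparse form a client would broadcast
    to miners for verification and to peers for aggregation.
    """
    flat = update.ravel()
    k = max(1, int(rate * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    return idx, flat[idx]

def decompress(idx: np.ndarray, values: np.ndarray, shape) -> np.ndarray:
    """Rebuild a dense update from its sparse form before aggregation."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = values
    return flat.reshape(shape)

# Example: a 2% rate shrinks per-client traffic by ~98%, consistent in
# magnitude with the 95-98% reduction reported in the abstract.
rng = np.random.default_rng(0)
delta = rng.standard_normal(1000)          # a client's local model update
idx, vals = compress_topk(delta, rate=0.02)
restored = decompress(idx, vals, delta.shape)
```

Per the abstract, BCFL does not fix this rate by hand: it is chosen jointly with the block generation rate by solving the bi-convex problem that trades compression error against limited training time.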
Pages: 3358-3372
Page count: 15