TORR: A Lightweight Blockchain for Decentralized Federated Learning

Cited by: 11
Authors
Ma, Xuyang [1 ]
Xu, Du [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 610054, Peoples R China
Keywords
Federated learning; Consensus protocol; Servers; Reliability; Performance evaluation; Training; Data models; Blockchain; consensus; federated learning (FL); storage; SYSTEM
DOI
10.1109/JIOT.2023.3288078
CLC Classification Number
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
Federated learning (FL) has received considerable attention because it allows multiple devices to train models locally without revealing sensitive data. Well-trained local models are transmitted to a parameter server for aggregation. This dependence on a trusted central server makes FL vulnerable to a single point of failure or attack. Blockchain is regarded as a state-of-the-art way to decentralize the central server while simultaneously providing attractive features such as immutability, traceability, and accountability. However, current popular blockchain systems cannot be combined with FL seamlessly. Because all local models must be collected before aggregation, the latency of FL is determined by the slowest device. The consensus process required by the blockchain increases latency further, especially when a large block is needed to include the models. Moreover, the ever-growing chain, together with the stored models, occupies considerable storage space, making deployment on lightweight devices impractical. To address these problems, we propose TORR, a lightweight blockchain for FL. A novel consensus protocol, Proof of Reliability, is designed to achieve fast consensus while mitigating the impact of stragglers. A storage protocol is designed based on erasure coding and a periodic storage refreshing policy. Erasure coding makes full use of the devices' limited storage space, and periodic refreshing reduces the overall storage requirement. Compared with a common blockchain-based FL system, TORR reduces system latency, overall storage overhead, and peak storage overhead by up to 62%, 75.44%, and 51.77%, respectively.
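To make the storage claim concrete, the Python sketch below illustrates the general (k, n) erasure-coding trade-off the abstract appeals to, not TORR's actual protocol: each of n devices holds a chunk of size |B|/k instead of a full replica, and the block survives up to n-k chunk losses. The (k=2, n=3) XOR-parity code, the encode/decode helpers, and the example block are illustrative assumptions; the paper's coding scheme, parameters, and refresh schedule are not specified in the abstract.

def encode(block: bytes):
    """Split a block into k=2 data chunks plus one XOR parity chunk (n=3)."""
    assert len(block) % 2 == 0  # assume even length for simplicity
    half = len(block) // 2
    d1, d2 = block[:half], block[half:]
    parity = bytes(a ^ b for a, b in zip(d1, d2))
    return [d1, d2, parity]  # any 2 of these 3 chunks rebuild the block

def decode(chunks):
    """Rebuild the block from any two chunks; None marks a lost chunk."""
    d1, d2, parity = chunks
    if d1 is None:
        d1 = bytes(a ^ b for a, b in zip(d2, parity))  # d1 = d2 XOR parity
    if d2 is None:
        d2 = bytes(a ^ b for a, b in zip(d1, parity))  # d2 = d1 XOR parity
    return d1 + d2

block = b"model-update-bytes!!"  # stands in for a serialized block of models
shards = encode(block)
assert decode([None, shards[1], shards[2]]) == block  # survives losing d1
assert decode([shards[0], None, shards[2]]) == block  # survives losing d2
assert decode([shards[0], shards[1], None]) == block  # survives losing parity
# Full replication across 3 devices stores 3*|B|; this code stores 1.5*|B|
# in total, i.e., 0.5*|B| per device, which is how erasure coding stretches
# the limited storage of lightweight devices.

Under this reading, the periodic storage refreshing policy would re-encode and redistribute chunks at intervals so that stale blocks can be discarded, bounding each device's storage over time; the exact mechanism is described in the paper itself.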
Pages: 1028-1040
Page count: 13