Communication and Computation Reduction for Split Learning using Asynchronous Training

Cited by: 17
Authors
Chen, Xing [1]
Li, Jingtao [1]
Chakrabarti, Chaitali [1]
Affiliation
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
Source
2021 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2021) | 2021
Keywords
Split learning; Communication reduction; Asynchronous training; Quantization
DOI
10.1109/SiPS52927.2021.00022
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Code
0808; 0809
Abstract
Split learning is a promising privacy-preserving distributed learning scheme with a low computation requirement at the edge device, but it suffers from high communication overhead between the edge device and the server. To reduce this overhead, this paper proposes a loss-based asynchronous training scheme that updates the client-side model less frequently and sends/receives activations/gradients only in selected epochs. To further reduce the communication overhead, the activations/gradients are quantized to 8-bit floating point prior to transmission. An added benefit of the proposed communication reduction method is that client-side computation is also reduced, owing to the smaller number of client model updates. Furthermore, the privacy of the proposed communication-reduction-based split learning method is almost the same as that of traditional split learning. Simulation results with VGG11, VGG13, and ResNet18 models on CIFAR-10 show that the communication cost is reduced by 1.64x-106.7x and client computation by 2.86x-32.1x with less than 0.5% accuracy degradation for the single-client case. For the 5- and 10-client cases, the communication cost reduction on VGG11 is 11.9x and 11.3x, respectively, for a 0.5% loss in accuracy.
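To make the two mechanisms in the abstract concrete, below is a minimal NumPy sketch of (a) simulated 8-bit floating-point quantization of the activations/gradients and (b) a loss-based trigger deciding in which epochs the client synchronizes with the server. The names quantize_fp8 and client_should_sync, the 1-sign/4-exponent/3-mantissa bit split, and the relative-loss-change threshold rule are illustrative assumptions; the paper states only that transmissions use 8-bit floating point and that the epoch selection is loss-based.

    import numpy as np

    def quantize_fp8(x, exp_bits=4, man_bits=3):
        # Simulate 8-bit floating point (1 sign / 4 exponent / 3 mantissa
        # bits assumed here) by rounding each float32 value to the nearest
        # point on the FP8 grid. Storage stays float32; only precision is
        # reduced. Overflow saturation and denormals are not modeled.
        x = np.asarray(x, dtype=np.float32)
        sign = np.sign(x)
        mag = np.abs(x)
        safe = np.where(mag > 0, mag, 1.0)       # avoid log2(0)
        exp = np.floor(np.log2(safe))            # power-of-two bucket
        bias = 2 ** (exp_bits - 1) - 1
        exp = np.clip(exp, 1 - bias, bias)       # representable exponents
        step = 2.0 ** (exp - man_bits)           # grid spacing in bucket
        q = np.round(mag / step) * step
        return np.where(mag > 0, sign * q, 0.0).astype(np.float32)

    def client_should_sync(curr_loss, loss_at_last_sync, threshold=0.25):
        # Hypothetical loss-based trigger: exchange (quantized) activations
        # and gradients, and update the client-side model, only when the
        # loss has changed by more than `threshold` (relative) since the
        # last synchronized epoch. The paper's exact rule may differ, and
        # the 0.25 threshold is an arbitrary demo value.
        return abs(loss_at_last_sync - curr_loss) / loss_at_last_sync > threshold

    # Toy driver with a synthetic decaying loss curve: only some epochs
    # trigger a sync, so activations/gradients cross the network (and the
    # client runs forward/backward passes) only in those epochs. In the
    # remaining epochs the server can reuse activations cached at the last
    # sync, since the frozen client model would reproduce them exactly.
    losses = [2.0 * 0.9 ** e for e in range(20)]
    loss_at_last_sync = losses[0]
    for epoch, loss in enumerate(losses):
        if epoch == 0 or client_should_sync(loss, loss_at_last_sync):
            loss_at_last_sync = loss
            # client: send quantize_fp8(activations) to server
            # server: trains, returns quantize_fp8(cut-layer gradients)
            print(f"epoch {epoch:2d}: sync  (loss={loss:.3f})")
        else:
            print(f"epoch {epoch:2d}: skip  (loss={loss:.3f})")

Run as-is, the driver synchronizes roughly every third epoch, which is the intended effect: fewer client model updates (less client computation) and fewer transmitted tensors, with quantization shrinking the tensors that are still sent.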
Pages: 76-81
Page count: 6