Reducing Communication for Split Learning by Randomized Top-k Sparsification

Cited: 0
Authors
Zheng, Fei [1 ]
Chen, Chaochao [1 ]
Lyu, Lingjuan [2 ]
Yao, Binhui [3 ]
Affiliations
[1] Zhejiang University, Hangzhou, China
[2] Sony AI, Schlieren, Switzerland
[3] Midea Group, Foshan, China
Source
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), 2023
Keywords
Networks
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Split learning is a simple solution for Vertical Federated Learning (VFL) that has drawn substantial attention in both research and application due to its simplicity and efficiency. However, communication efficiency remains a crucial issue for split learning. In this paper, we investigate several communication-reduction methods for split learning, including cut-layer size reduction, top-k sparsification, quantization, and L1 regularization. Through an analysis of cut-layer size reduction and top-k sparsification, we further propose randomized top-k sparsification, which selects top-k elements with high probability while retaining a small probability of selecting non-top-k elements, enabling the model to generalize and converge better. Empirical results show that, compared with other communication-reduction methods, our proposed randomized top-k sparsification achieves better model performance at the same compression level.
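The abstract describes randomized top-k sparsification only at a high level. Below is a minimal NumPy sketch of one plausible reading, in which each top-k slot is occasionally swapped for a random non-top-k element; the function name randomized_topk, the swap probability eps, and the exact replacement policy are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def randomized_topk(x: np.ndarray, k: int, eps: float = 0.1) -> np.ndarray:
        # Sketch only: sparsify x to exactly k nonzero entries, mostly the
        # top-k by magnitude, occasionally random non-top-k elements.
        flat = x.ravel()
        d = flat.size
        assert 0 < k <= d // 2, "sketch assumes k is small relative to d"

        # Indices of the k largest-magnitude elements (deterministic top-k).
        topk_idx = np.argpartition(np.abs(flat), -k)[-k:]
        rest_idx = np.setdiff1d(np.arange(d), topk_idx, assume_unique=True)

        # With small probability eps, swap each top-k index for a uniformly
        # random non-top-k index, so small elements are sometimes transmitted.
        swap = np.random.rand(k) < eps
        chosen = topk_idx.copy()
        n_swap = int(swap.sum())
        if n_swap > 0:
            chosen[swap] = np.random.choice(rest_idx, size=n_swap, replace=False)

        # Zero out everything except the k chosen elements.
        out = np.zeros_like(flat)
        out[chosen] = flat[chosen]
        return out.reshape(x.shape)

Applied to the cut-layer activations before transmission, this keeps exactly k values per vector; setting eps = 0 recovers plain top-k sparsification.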
Pages: 4665-4673
Page count: 9
Related papers
50 records in total
  • [1] Sun, Kangkang; Xu, Hansong; Hua, Kun; Lin, Xi; Li, Gaolei; Jiang, Tigang; Li, Jianhua. Joint Top-K Sparsification and Shuffle Model for Communication-Privacy-Accuracy Tradeoffs in Federated-Learning-Based IoV. IEEE Internet of Things Journal, 2024, 11(11): 19721-19735.
  • [2] Ruan, Mengzhe; Yan, Guangfeng; Xiao, Yuanzhang; Song, Linqi; Xu, Weitao. Adaptive Top-K in SGD for Communication-Efficient Distributed Learning. IEEE Global Communications Conference (GLOBECOM), 2023: 5280-5285.
  • [3] Wang, Yongzhi; Gui, Pengfei; Sookhak, Mehdi. Sort-then-insert: A space efficient and oblivious model aggregation algorithm for top-k sparsification in federated learning. Future Generation Computer Systems: The International Journal of eScience, 2024, 158: 1-10.
  • [4] Petersen, Felix; Kuehne, Hilde; Borgelt, Christian; Deussen, Oliver. Differentiable Top-k Classification Learning. International Conference on Machine Learning (ICML), Vol. 162, 2022.
  • [5] Fan, Yanbo; Lyu, Siwei; Ying, Yiming; Hu, Bao-Gang. Learning with Average Top-k Loss. Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017.
  • [6] Shi, Shaohuai; Wang, Qiang; Zhao, Kaiyong; Tang, Zhenheng; Wang, Yuxin; Huang, Xiang; Chu, Xiaowen. A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks. 39th IEEE International Conference on Distributed Computing Systems (ICDCS 2019), 2019: 2238-2247.
  • [7] Chaudhuri, Sougata; Tewari, Ambuj. Online Learning to Rank with Top-k Feedback. Journal of Machine Learning Research, 2017, 18.
  • [8] Li, X.; Ma, C.; Li, G.; Xu, P.; Liu, C. H.; Yuan, Y.; Wang, G. Meta Auxiliary Learning for Top-K Recommendation. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(10): 10857-10870.
  • [9] Huebschle-Schneider, Lorenz; Sanders, Peter. Communication Efficient Algorithms for Top-k Selection Problems. 30th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2016), 2016: 659-668.
  • [10] Pyun, Gwangbum; Yun, Unil. Mining top-k frequent patterns with combination reducing techniques. Applied Intelligence, 2014, 41: 76-98.