Coded Distributed Computing for Hierarchical Multi-task Learning

Cited by: 2
Authors
Hu, Haoyang [1]
Li, Songze [2,3]
Cheng, Minquan [4]
Wu, Youlong [1]
Affiliations
[1] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai, Peoples R China
[2] Hong Kong Univ Sci & Technol, Thrust Internet Things, Hong Kong, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[4] Guangxi Normal Univ, Guilin, Peoples R China
Source
2023 IEEE Information Theory Workshop (ITW) | 2023
Keywords
Multi-task learning; coded computing; distributed learning; hierarchical systems; communication load
DOI
10.1109/ITW55543.2023.10161632
CLC classification number
TP [Automation Technology, Computer Technology]
Discipline classification code
0812
Abstract
In this paper, we consider a hierarchical distributed multi-task learning (MTL) system in which distributed users wish to jointly learn different models orchestrated by a central server with the help of a layer of multiple relays. Since the users need to download different learning models in the downlink transmission, distributed MTL suffers more severely from the communication bottleneck than a single-task learning system. To address this issue, we propose a coded hierarchical MTL scheme that exploits the connection topology and introduces coding techniques to reduce the communication loads. It is shown that the proposed scheme can significantly reduce the communication loads in both the uplink and downlink transmissions between the relays and the server. Moreover, we provide information-theoretic lower bounds on the optimal uplink and downlink communication loads, and prove that the gaps between the achievable upper bounds and the lower bounds are within the minimum number of users connected to any relay.
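To make the coding idea concrete, the following is a minimal toy sketch in Python of the generic coded-multicasting principle that such hierarchical coded schemes build on; it is not the scheme proposed in the paper. It assumes a server that must deliver a model update u1 to relay 1 and u2 to relay 2, and that each relay already holds the other update as side information (for example, obtained during the uplink phase); all names and the two-relay setup are hypothetical. Under these assumptions, a single XOR multicast replaces two unicasts and halves the downlink load.

```python
# Toy sketch of coded multicasting with side information (not the paper's scheme).
import numpy as np

rng = np.random.default_rng(0)

# Two task-model updates, represented as byte vectors of equal length.
u1 = rng.integers(0, 256, size=8, dtype=np.uint8)   # wanted by relay 1
u2 = rng.integers(0, 256, size=8, dtype=np.uint8)   # wanted by relay 2

# Assumed side information at each relay (e.g., gathered in the uplink phase).
side_info_relay1 = u2
side_info_relay2 = u1

# Uncoded downlink: the server unicasts u1 and u2 separately -> 2 packets.
uncoded_load = 2

# Coded downlink: the server multicasts a single XOR packet -> 1 packet.
coded_packet = u1 ^ u2
coded_load = 1

# Each relay cancels its side information to recover the update it needs.
decoded_at_relay1 = coded_packet ^ side_info_relay1
decoded_at_relay2 = coded_packet ^ side_info_relay2

assert np.array_equal(decoded_at_relay1, u1)
assert np.array_equal(decoded_at_relay2, u2)
print(f"downlink load: uncoded = {uncoded_load} packets, coded = {coded_load} packet")
```

The paper's scheme applies this kind of coding across the user-relay connection topology in both the uplink and downlink; the sketch above only illustrates why exchanging coded combinations can lower the communication load compared with sending every model separately.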
Pages: 480-485
Number of pages: 6