Communication-Efficient Federated Multitask Learning Over Wireless Networks

Cited by: 19
Authors
Ma, Haoyu [1 ]
Guo, Huayan [1 ,2 ]
Lau, Vincent K. N. [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[2] Hong Kong Univ Sci & Technol, Shenzhen Res Inst, Shenzhen 518000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated multitask learning (FMTL); Lyapunov analysis; user scheduling; wireless federated learning (FL); CONVERGENCE;
DOI
10.1109/JIOT.2022.3201310
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification
0812;
Abstract
This article investigates the scheduling framework of the federated multitask learning (FMTL) problem with a hard-cooperation structure over wireless networks, where scheduling is more challenging because different tasks exhibit different convergence behaviors. The neural network model is decomposed into a common feature-extraction module and M task-specific modules, so that block gradients with respect to different modules can be scheduled separately. Exploiting this model structure, we propose a dynamic user and task scheduling scheme with a block-wise incremental gradient aggregation algorithm. We further propose a Lyapunov-drift-based scheduling scheme that minimizes the overall communication latency by utilizing both the instantaneous data importance and the channel state information. We prove that the proposed scheme converges almost surely to a KKT solution of the training problem, which also resolves the data-distortion issue. Simulation results show that the proposed scheme significantly reduces communication latency compared with state-of-the-art baseline schemes.
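The hard-cooperation structure and the scheduling trade-off described in the abstract can be sketched in a few lines. This is a minimal illustrative toy, not the paper's algorithm: the linear model, the gradient-norm importance measure, the virtual queues, and the drift-plus-penalty weighting `V` are all assumptions made here to show the idea of (i) block-wise gradients for a shared feature extractor plus M task heads and (ii) picking which user's block to upload based on data importance versus channel state.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 3                        # number of tasks
d_in, d_feat, d_out = 8, 4, 2

# Hard-cooperation decomposition: one shared feature-extraction module
# and M task-specific modules (linear layers for illustration only).
W_shared = rng.normal(size=(d_feat, d_in))
W_tasks = [rng.normal(size=(d_out, d_feat)) for _ in range(M)]

def block_gradients(x, y, task):
    """Squared-error loss; returns the gradient w.r.t. the shared block
    and w.r.t. the given task's block separately, so the two blocks can
    be aggregated/scheduled independently (block-wise aggregation)."""
    h = W_shared @ x                               # shared features
    err = W_tasks[task] @ h - y                    # task prediction error
    g_task = np.outer(err, h)                      # dL/dW_tasks[task]
    g_shared = np.outer(W_tasks[task].T @ err, x)  # dL/dW_shared
    return g_shared, g_task

def schedule(importance, rates, queues, V=1.0):
    """Toy Lyapunov drift-plus-penalty style score: queue-weighted data
    importance minus V times a latency proxy (1/rate). Picks the user
    whose block upload is currently most 'worth' its airtime."""
    scores = queues * importance - V / rates
    return int(np.argmax(scores))

# One toy scheduling round over 4 users, each holding one task's data.
users = [(rng.normal(size=d_in), rng.normal(size=d_out), k % M)
         for k in range(4)]
grads = [block_gradients(x, y, t) for x, y, t in users]
importance = np.array([np.linalg.norm(gs) + np.linalg.norm(gt)
                       for gs, gt in grads])     # gradient-norm importance
rates = rng.uniform(0.5, 2.0, size=4)            # instantaneous channel rates
queues = np.ones(4)                              # virtual queues (all equal)

chosen = schedule(importance, rates, queues)
print("scheduled user:", chosen)
```

In the paper the scheduler also decides *which block* (shared vs. task-specific) each scheduled user transmits; the sketch above collapses that choice into a single per-user score to keep the trade-off visible.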
Pages: 609-624
Page count: 16
Cited References
51 in total
[21]  
Lin Yujun., 2020, Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
[22]   Distributed Multi-Task Relationship Learning [J].
Liu, Sulin ;
Pan, Sinno Jialin ;
Ho, Qirong .
KDD'17: PROCEEDINGS OF THE 23RD ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2017, :937-946
[23]   Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts [J].
Ma, Jiaqi ;
Zhao, Zhe ;
Yi, Xinyang ;
Chen, Jilin ;
Hong, Lichan ;
Chi, Ed H. .
KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, :1930-1939
[24]   Federated Learning for Internet of Things: A Comprehensive Survey [J].
Nguyen, Dinh C. ;
Ding, Ming ;
Pathirana, Pubudu N. ;
Seneviratne, Aruna ;
Li, Jun ;
Poor, H. Vincent .
IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2021, 23 (03) :1622-1658
[25]  
Nishio T, Yonetani R, 2019, Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge, IEEE ICC, DOI 10.1109/ICC.2019.8761315
[26]   HyperFace: A Deep Multi-Task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition [J].
Ranjan, Rajeev ;
Patel, Vishal M. ;
Chellappa, Rama .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2019, 41 (01) :121-135
[27]   Multi-Task Learning with Neural Networks for Voice Query Understanding on an Entertainment Platform [J].
Rao, Jinfeng ;
Ture, Ferhan ;
Lin, Jimmy .
KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, :636-645
[28]   Scheduling for Cellular Federated Edge Learning With Importance and Channel Awareness [J].
Ren, Jinke ;
He, Yinghui ;
Wen, Dingzhu ;
Yu, Guanding ;
Huang, Kaibin ;
Guo, Dongning .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2020, 19 (11) :7690-7703
[29]  
Ruder S, 2017, Arxiv, DOI [arXiv:1706.05098, 10.48550/arXiv.1706.05098]
[30]   Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning [J].
Shi, Wenqi ;
Zhou, Sheng ;
Niu, Zhisheng ;
Jiang, Miao ;
Geng, Lu .
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (01) :453-467