Distributed Clustering for Cooperative Multi-Task Learning Networks

Cited by: 4
Authors
Li, Jiani [1 ]
Wang, Weihan [2 ]
Abbas, Waseem [3 ]
Koutsoukos, Xenofon [1 ]
Affiliations
[1] Vanderbilt Univ, Dept Comp Sci, Nashville, TN 37235 USA
[2] Stevens Inst Technol, Dept Comp Sci, Hoboken, NJ 07030 USA
[3] Univ Texas Dallas, Dept Syst Engn, Richardson, TX 75080 USA
Source
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING | 2023, Vol. 10, Issue 6
Keywords
Distributed clustering; distributed cooperative learning; multi-agent networks; multi-task learning; optimization; adaptation; algorithms
DOI
10.1109/TNSE.2023.3276854
CLC classification (Chinese Library Classification): T [Industrial Technology]
Discipline classification code: 08
Abstract
Distributed learning enables collaborative training of machine learning models across multiple agents by exchanging model parameters without sharing local data. Each agent generates data from a distinct but related distribution, and multi-task learning can effectively model such related tasks. This article focuses on clustered multi-task learning, in which agents are partitioned into clusters with distinct objectives and agents within the same cluster share a common objective; the clustering structure is unknown a priori. Cooperation among agents in the same cluster is beneficial and improves overall learning performance, whereas indiscriminate cooperation among agents with different objectives leads to undesired outcomes. Accurately capturing the clustering structure therefore benefits cooperation and offers many practical advantages; for instance, it helps advertising companies better target their ads. This article proposes an adaptive clustering method that allows distributed agents to learn the most appropriate neighbors to collaborate with and thereby form clusters. We prove the convergence of every agent towards its objective and analyze the network learning performance under the proposed clustering method. Further, we present a method for computing combination weights that approximately optimizes the network's learning performance, determining how an agent should aggregate its neighbors' model parameters after the clustering step. The theoretical analysis is well validated by evaluation results on target localization and digit classification, showing that the proposed clustering method outperforms existing distributed clustering methods as well as the non-cooperative baseline.
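The adapt-then-combine workflow sketched in the abstract (local gradient step, adaptive neighbor selection, then aggregation) can be illustrated with a toy simulation. This is not the paper's algorithm: the neighbor-selection rule here is a hypothetical fixed parameter-distance threshold `tau`, and aggregation uses uniform weights, whereas the paper learns clusters adaptively and computes approximately optimal combination weights.

```python
# Toy sketch of clustered multi-task diffusion learning (simplified; not the
# paper's method). Eight agents solve local least-squares tasks drawn from two
# unknown clusters; each agent cooperates only with neighbors whose
# intermediate estimates stay close to its own.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, mu, tau = 8, 5, 0.05, 0.2
cluster_of = [0, 0, 0, 0, 1, 1, 1, 1]              # true (unknown) clusters
targets = [np.full(dim, 1.0), np.full(dim, -1.0)]  # per-cluster optimum

# Each agent holds a local dataset generated by its cluster's model.
data = []
for k in range(n_agents):
    X = rng.normal(size=(200, dim))
    y = X @ targets[cluster_of[k]] + 0.1 * rng.normal(size=200)
    data.append((X, y))

w = [np.zeros(dim) for _ in range(n_agents)]
for _ in range(400):
    # Adapt: each agent takes a gradient step on its own data.
    psi = []
    for k, (X, y) in enumerate(data):
        grad = 2.0 * X.T @ (X @ w[k] - y) / len(y)
        psi.append(w[k] - mu * grad)
    # Cluster + combine: average only with neighbors whose intermediate
    # estimates lie within distance tau (hypothetical selection rule;
    # the agent itself always passes, so the peer set is never empty).
    w = [np.mean([psi[j] for j in range(n_agents)
                  if np.linalg.norm(psi[j] - psi[k]) < tau], axis=0)
         for k in range(n_agents)]

errs = [np.linalg.norm(w[k] - targets[cluster_of[k]]) for k in range(n_agents)]
print(f"max distance to own-cluster optimum: {max(errs):.3f}")
```

Because agents with different objectives drift apart from the first iteration, the distance test quickly excludes cross-cluster neighbors, and each agent ends up averaging only within its true cluster, which is the qualitative behavior the abstract describes.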
Pages: 3933-3942
Number of pages: 10
References (44 in total)
[11]  
Hashimoto K., 2017, EMNLP 2017, P1923, DOI 10.18653/v1/D17-1206
[12]  
Chen J., 2014, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), P5487, DOI 10.1109/ICASSP.2014.6854652
[13]   An efficient k-means clustering algorithm:: Analysis and implementation [J].
Kanungo, T ;
Mount, DM ;
Netanyahu, NS ;
Piatko, CD ;
Silverman, R ;
Wu, AY .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2002, 24 (07) :881-892
[14]   Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics [J].
Kendall, Alex ;
Gal, Yarin ;
Cipolla, Roberto .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :7482-7491
[15]  
Khawatmi S, 2015, EUR SIGNAL PR CONF, P2696, DOI 10.1109/EUSIPCO.2015.7362874
[16]  
Konečný J, 2017, Arxiv, DOI [arXiv:1610.05492, 10.48550/arXiv.1610.05492, DOI 10.48550/ARXIV.1610.05492]
[17]  
Li J., 2020, arXiv preprint arXiv:2008.08171, P6
[18]  
Long M., 2017, Advances in Neural Information Processing Systems, V30
[19]   DeepAutoD: Research on Distributed Machine Learning Oriented Scalable Mobile Communication Security Unpacking System [J].
Lu, Hui ;
Jin, Chengjie ;
Helu, Xiaohan ;
Du, Xiaojiang ;
Guizani, Mohsen ;
Tian, Zhihong .
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (04) :2052-2065
[20]   Cross-stitch Networks for Multi-task Learning [J].
Misra, Ishan ;
Shrivastava, Abhinav ;
Gupta, Abhinav ;
Hebert, Martial .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :3994-4003