Asynchronous Multi-Task Learning

Cited by: 0
Authors
Baytas, Inci M. [1 ]
Yan, Ming [2 ,3 ]
Jain, Anil K. [1 ]
Zhou, Jiayu [1 ]
Affiliations
[1] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
[2] Michigan State Univ, Dept Computat Math Sci & Engn, E Lansing, MI 48824 USA
[3] Michigan State Univ, Dept Math, E Lansing, MI 48824 USA
Source
2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2016
Funding
U.S. National Science Foundation
Keywords
THRESHOLDING ALGORITHM
DOI
10.1109/ICDM.2016.61
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Many real-world machine learning applications involve several inter-related learning tasks. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models for different hospitals may differ because of inherent differences in the distributions of their patient populations, yet the models are also closely related because the learning tasks model the same disease. By learning all the tasks simultaneously, the multi-task learning (MTL) paradigm performs inductive knowledge transfer among tasks to improve generalization performance. When the datasets for the learning tasks are stored at different locations, it may not be feasible to move the data into a centralized computing environment due to practical issues such as high data volume and privacy. In this paper, we propose a principled MTL framework for distributed and asynchronous optimization to address these challenges. In our framework, the gradient update does not wait for the gradient information to be collected from all the tasks. Therefore, the proposed method remains efficient when the communication delay is high for some task nodes. We show that many regularized MTL formulations can benefit from this framework, including low-rank MTL for shared subspace learning. Empirical studies on both synthetic and real-world datasets demonstrate the efficiency and effectiveness of the proposed framework.
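To make the asynchronous update scheme described in the abstract concrete, the sketch below simulates the idea in a single process: each task node holds its own least-squares task, the central node applies a partial gradient step as soon as any one task's gradient arrives (rather than waiting for all tasks), and a nuclear-norm proximal step enforces the shared low-rank structure. This is only an illustrative approximation, not the authors' implementation; the function names (prox_nuclear, simulate_async_mtl), the squared loss, the step sizes, and the single-process simulation of asynchrony are all assumptions introduced here.

```python
import numpy as np


def prox_nuclear(W, tau):
    # Proximal operator of tau * nuclear norm: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt


def simulate_async_mtl(Xs, ys, lam=0.1, step=0.01, n_updates=2000, seed=0):
    # Simulated asynchronous proximal-gradient MTL (illustrative only).
    # Xs[t], ys[t] are the data held by task node t; column t of W is its model.
    rng = np.random.default_rng(seed)
    d, T = Xs[0].shape[1], len(Xs)
    W = np.zeros((d, T))
    for _ in range(n_updates):
        t = rng.integers(T)                       # whichever node's gradient arrives first
        residual = Xs[t] @ W[:, t] - ys[t]
        grad_t = Xs[t].T @ residual / len(ys[t])  # least-squares gradient for task t only
        W[:, t] -= step * grad_t                  # update without waiting for other tasks
        W = prox_nuclear(W, step * lam)           # shared low-rank (nuclear-norm) prox step
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    basis = rng.standard_normal((20, 2))          # rank-2 shared subspace
    W_true = basis @ rng.standard_normal((2, 5))  # 5 related tasks, 20 features each
    Xs = [rng.standard_normal((100, 20)) for _ in range(5)]
    ys = [Xs[t] @ W_true[:, t] + 0.1 * rng.standard_normal(100) for t in range(5)]
    W_hat = simulate_async_mtl(Xs, ys)
    print("singular values of learned W:", np.round(np.linalg.svd(W_hat, compute_uv=False), 3))
```

In a real distributed deployment the inner loop would be driven by gradients arriving over the network from separate task nodes; the random task selection here only stands in for that arrival order.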
Pages: 11-20
Number of pages: 10
Related Papers
50 records in total
  • [1] Efficient Multi-Task Asynchronous Federated Learning in Edge Computing
    Cao, Xinyuan
    Ouyang, Tao
    Zhao, Kongyange
    Li, Yousheng
    Chen, Xu
    2024 IEEE/ACM 32ND INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE, IWQOS, 2024,
  • [3] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [4] Privacy-Preserving Distributed Multi-Task Learning with Asynchronous Updates
    Xie, Liyang
    Baytas, Inci M.
    Lin, Kaixiang
    Zhou, Jiayu
    KDD'17: PROCEEDINGS OF THE 23RD ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2017, : 1195 - 1204
  • [5] An Asynchronous Multi-Task Semantic Communication Method
    Tian, Zhiyi
    Vo, Hiep
    Zhang, Chenhan
    Min, Geyong
    Yu, Shui
    IEEE NETWORK, 2024, 38 (04) : 275 - 283
  • [6] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng
    Lee, Chen-Yu
    Ulbricht, Daniel
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [7] Learning to Branch for Multi-Task Learning
    Guo, Pengsheng
    Lee, Chen-Yu
    Ulbricht, Daniel
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [8] Boosted multi-task learning
    Chapelle, Olivier
    Shivaswamy, Pannagadatta
    Vadrevu, Srinivas
    Weinberger, Kilian
    Zhang, Ya
    Tseng, Belle
    MACHINE LEARNING, 2011, 85 : 149 - 173
  • [9] An overview of multi-task learning
    Zhang, Yu
    Yang, Qiang
    NATIONAL SCIENCE REVIEW, 2018, 5 (01) : 30 - 43
  • [10] On Partial Multi-Task Learning
    He, Yi
    Wu, Baijun
    Wu, Di
    Wu, Xindong
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1174 - 1181