Asynchronous Multi-Task Learning

Cited by: 0
Authors
Baytas, Inci M. [1 ]
Yan, Ming [2 ,3 ]
Jain, Anil K. [1 ]
Zhou, Jiayu [1 ]
Affiliations
[1] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
[2] Michigan State Univ, Dept Computat Math Sci & Engn, E Lansing, MI 48824 USA
[3] Michigan State Univ, Dept Math, E Lansing, MI 48824 USA
Source
2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM) | 2016
Funding
U.S. National Science Foundation
Keywords
THRESHOLDING ALGORITHM
DOI
10.1109/ICDM.2016.61
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many real-world machine learning applications involve several inter-related learning tasks. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models may differ from hospital to hospital because of inherent differences in the distributions of the patient populations, yet they are closely related because they model the same disease. By learning all the tasks simultaneously, the multi-task learning (MTL) paradigm performs inductive knowledge transfer among tasks to improve generalization performance. When the datasets for the learning tasks are stored at different locations, however, it may not be feasible to transfer them to a centralized computing environment, owing to practical issues such as high data volume and privacy. In this paper, we propose a principled MTL framework for distributed and asynchronous optimization that addresses these challenges. In our framework, a gradient update does not wait for the gradient information to be collected from all the tasks, so the proposed method remains efficient when some task nodes suffer high communication delays. We show that many regularized MTL formulations can benefit from this framework, including low-rank MTL for shared subspace learning. Empirical studies on both synthetic and real-world datasets demonstrate the efficiency and effectiveness of the proposed framework.
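To make the asynchronous update scheme described in the abstract concrete, here is a minimal single-machine Python sketch, not the authors' implementation: one thread simulates each task node, each node computes a gradient on its local data only, and a central server applies each update as it arrives together with a trace-norm proximal step that encourages a shared low-rank subspace. All names (Server, task_node, prox_trace_norm) and hyperparameters are illustrative assumptions.

import threading
import numpy as np

def prox_trace_norm(W, tau):
    # Singular-value soft-thresholding: the proximal operator of tau * ||W||_*,
    # which encourages low-rank (shared-subspace) structure across tasks.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

class Server:
    # Holds the d x T task-weight matrix W, one column per task.
    def __init__(self, d, T, step=0.05, reg=0.1):
        self.W = np.zeros((d, T))
        self.step, self.reg = step, reg
        self.lock = threading.Lock()  # serializes only the small server-side update

    def pull(self, t):
        with self.lock:
            return self.W[:, t].copy()

    def push_gradient(self, t, grad):
        # Apply the update as soon as task t's gradient arrives; never wait
        # for the remaining tasks (the asynchronous part of the scheme).
        with self.lock:
            self.W[:, t] -= self.step * grad
            self.W = prox_trace_norm(self.W, self.step * self.reg)

def task_node(server, t, X, y, iters=200):
    # One node per task: a least-squares gradient on local data only, so raw
    # data never leaves the node (the privacy motivation in the abstract).
    for _ in range(iters):
        w = server.pull(t)
        grad = X.T @ (X @ w - y) / len(y)
        server.push_gradient(t, grad)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T, n = 20, 5, 100
    basis = rng.normal(size=(d, 2))  # synthetic tasks share a rank-2 subspace
    data = []
    for _ in range(T):
        w_true = basis @ rng.normal(size=2)
        X = rng.normal(size=(n, d))
        data.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

    server = Server(d, T)
    threads = [threading.Thread(target=task_node, args=(server, t, X, y))
               for t, (X, y) in enumerate(data)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    print("approximate rank of learned W:", np.linalg.matrix_rank(server.W, tol=1e-2))

The lock serializes only the cheap server-side update; the expensive per-task gradient computations overlap freely across threads, which is where the asynchrony pays off when some nodes are slow.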
Pages: 11-20
Page count: 10