Asynchronous Multi-Task Learning

Cited by: 0
Authors
Baytas, Inci M. [1 ]
Yan, Ming [2 ,3 ]
Jain, Anil K. [1 ]
Zhou, Jiayu [1 ]
Affiliations
[1] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
[2] Michigan State Univ, Dept Computat Math Sci & Engn, E Lansing, MI 48824 USA
[3] Michigan State Univ, Dept Math, E Lansing, MI 48824 USA
Source
2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM) | 2016
Funding
U.S. National Science Foundation;
Keywords
THRESHOLDING ALGORITHM;
DOI
10.1109/ICDM.2016.61
CLC (Chinese Library Classification) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many real-world machine learning applications involve several inter-related learning tasks. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The model for each hospital may be different because of inherent differences in the distributions of the patient populations. However, the models are also closely related, because the learning tasks model the same disease. By learning all the tasks simultaneously, the multi-task learning (MTL) paradigm performs inductive knowledge transfer among tasks to improve generalization performance. When the datasets for the learning tasks are stored at different locations, it may not always be feasible to move the data into a centralized computing environment, due to practical issues such as high data volume and privacy. In this paper, we propose a principled MTL framework for distributed and asynchronous optimization that addresses these challenges. In our framework, a gradient update does not wait for the gradient information from all the tasks to be collected, so the proposed method remains efficient when the communication delay of some task nodes is high. We show that many regularized MTL formulations can benefit from this framework, including low-rank MTL for shared subspace learning. Empirical studies on both synthetic and real-world datasets demonstrate the efficiency and effectiveness of the proposed framework.
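The asynchronous update scheme described in the abstract can be illustrated with a minimal thread-based sketch (not the authors' implementation): each task worker computes its gradient against a possibly stale copy of the shared model and applies a proximal-gradient update without waiting for the other tasks. The nuclear-norm proximal operator (singular-value soft-thresholding) stands in for the low-rank shared-subspace regularizer; the function names (`async_mtl`, `prox_nuclear`) and parameter choices are hypothetical.

```python
import threading
import numpy as np

def prox_nuclear(W, tau):
    """Proximal operator of tau * ||W||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def async_mtl(Xs, ys, lam=0.1, step=0.01, n_updates=200):
    """Asynchronous proximal-gradient MTL with a nuclear-norm regularizer.

    Each worker thread repeatedly reads the shared model, computes the
    least-squares gradient for its OWN task (possibly against a stale copy),
    and applies a proximal update -- without waiting for the other tasks.
    """
    T, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((T, d))          # one row of W per task
    lock = threading.Lock()       # guards reads/writes of the shared W

    def worker(t):
        for _ in range(n_updates):
            with lock:
                w = W[t].copy()   # snapshot; may be stale by the time we update
            g = Xs[t].T @ (Xs[t] @ w - ys[t]) / len(ys[t])
            with lock:
                W[t] -= step * g
                W[:] = prox_nuclear(W, step * lam)  # couples the tasks

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(T)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return W
```

The key design point mirrored here is that a worker never blocks on the other tasks' gradients: coupling between tasks happens only through the shared proximal step, so a slow task node delays nobody else.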
Pages: 11-20 (10 pages)