Multitask Learning

Cited by: 4
Author
Rich Caruana
Affiliation
[1] Carnegie Mellon University,School of Computer Science
Source
Machine Learning | 1997 / Volume 28
Keywords
inductive transfer; parallel transfer; multitask learning; backpropagation; k-nearest neighbor; kernel regression; supervised learning; generalization;
DOI
Not available
Abstract
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
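The core mechanism described in the abstract, multiple tasks trained in parallel through a shared representation so that each task's error signal shapes features the other tasks reuse, can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's implementation: the synthetic tasks, network sizes, and learning rate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic related regression tasks: both are functions of the
# same hidden quantity h_true, so a shared representation should help.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=(5,))
h_true = X @ w_true
Y = np.stack([h_true, 2.0 * h_true + 1.0], axis=1)  # one column per task

n_hidden = 8
W1 = rng.normal(scale=0.1, size=(5, n_hidden))   # shared hidden layer
W2 = rng.normal(scale=0.1, size=(n_hidden, 2))   # one linear output head per task

lr = 0.1
for _ in range(1000):
    H = np.tanh(X @ W1)      # shared representation used by both tasks
    pred = H @ W2            # both task outputs computed in parallel
    err = pred - Y           # error signals from BOTH tasks backprop into W1
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)

mse = np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)
print(f"final MSE across both tasks: {mse:.4f}")
```

Because the gradient of both task heads flows through `W1`, the hidden layer is biased toward features useful to all tasks, which is the inductive-transfer effect the abstract describes.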
Pages: 41 - 75
Page count: 34
Related Papers
50 entries in total
  • [1] Learning to Multitask
    Zhang, Yu
    Wei, Ying
    Yang, Qiang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [2] Multitask learning
    Caruana, R
    MACHINE LEARNING, 1997, 28 (01) : 41 - 75
  • [3] Online multitask learning
    Dekel, Ofer
    Long, Philip M.
    Singer, Yoram
    LEARNING THEORY, PROCEEDINGS, 2006, 4005 : 453 - 467
  • [4] Multitask Coactive Learning
    Goetschalckx, Robby
    Fern, Alan
    Tadepalli, Prasad
    PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015, : 3518 - 3524
  • [5] Semisupervised Multitask Learning
    Liu, Qiuhua
    Liao, Xuejun
    Li, Hui
    Stack, Jason R.
    Carin, Lawrence
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2009, 31 (06) : 1074 - 1086
  • [6] Trainable Weights for Multitask Learning
    Ryu, Chaeeun
    Lee, Changwoo
    Choi, Hyuk Jin
    Lee, Chang-Hyun
    Jeon, Byoungjun
    Chie, Eui Kyu
    Kim, Young-Gon
    IEEE ACCESS, 2023, 11 : 105633 - 105641
  • [7] On Multiplicative Multitask Feature Learning
    Wang, Xin
    Bi, Jinbo
    Yu, Shipeng
    Sun, Jiangwen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [8] HIERARCHICAL MULTITASK LEARNING WITH CTC
    Sanabria, Ramon
    Metze, Florian
    2018 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY (SLT 2018), 2018, : 485 - 490
  • [9] Conic Programming for Multitask Learning
    Kato, Tsuyoshi
    Kashima, Hisashi
    Sugiyama, Masashi
    Asai, Kiyoshi
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2010, 22 (07) : 957 - 968
  • [10] A dozen tricks with multitask learning
    Caruana, Rich
    LECTURE NOTES IN COMPUTER SCIENCE, 2012, 7700 : 163 - 189