Multitask Learning

Cited by: 4
Author
Rich Caruana
Affiliation
[1] Carnegie Mellon University, School of Computer Science
Source
Machine Learning | 1997 / Volume 28
Keywords
inductive transfer; parallel transfer; multitask learning; backpropagation; k-nearest neighbor; kernel regression; supervised learning; generalization
DOI
Not available
Abstract
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
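To make the shared-representation idea concrete, here is a minimal sketch (not from the paper) of MTL in a backprop net: one hidden layer shared by all tasks, one output unit per task, and a summed squared-error loss so every task's training signal shapes the shared weights. The synthetic data, network sizes, and learning rate are illustrative assumptions.

```python
# Minimal MTL sketch: a shared hidden layer trained by backprop on several
# related tasks at once. All sizes and data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_tasks, n_samples = 10, 16, 3, 200

X = rng.normal(size=(n_samples, n_inputs))
# "Related" tasks: each target is a different nonlinear function of the same inputs.
Y = np.stack([np.tanh(X @ rng.normal(size=n_inputs)) for _ in range(n_tasks)], axis=1)

W_shared = rng.normal(scale=0.1, size=(n_inputs, n_hidden))   # shared by all tasks
W_tasks = rng.normal(scale=0.1, size=(n_hidden, n_tasks))     # one output unit per task
lr = 0.05

for epoch in range(2000):
    H = np.tanh(X @ W_shared)        # shared hidden representation
    Y_hat = H @ W_tasks              # parallel predictions, one column per task
    err = Y_hat - Y

    # Gradients of the mean squared error summed over all tasks.
    grad_tasks = H.T @ err / n_samples
    grad_shared = X.T @ ((err @ W_tasks.T) * (1.0 - H**2)) / n_samples

    W_tasks -= lr * grad_tasks
    W_shared -= lr * grad_shared

mse = ((np.tanh(X @ W_shared) @ W_tasks - Y) ** 2).mean(axis=0)
print("per-task MSE:", mse)
```

Because the hidden layer receives gradients from every output unit, the training signals of the related tasks act as the inductive bias the abstract describes; keeping only one output column reduces the sketch to ordinary single-task backprop.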
Pages: 41 - 75
Number of pages: 34
Related Papers
50 records in total
  • [31] Semisupervised Multitask Learning With Gaussian Processes
    Skolidis, Grigorios
    Sanguinetti, Guido
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2013, 24 (12) : 2101 - 2112
  • [32] Multitask machine learning for financial forecasting
    Di Persio, Luca
    Honchar, Oleksandr
    International Journal of Circuits, Systems and Signal Processing, 2018, 12 : 444 - 451
  • [33] The perceptual costs and benefits of learning to multitask
    Webb, Ben S.
    McGraw, Paul V.
    Levi, Dennis M.
    Li, Roger W.
    PERCEPTION, 2015, 44 : 47 - 48
  • [34] A Multitask Deep Learning Framework for DNER
    Jin, Ran
    Hou, Tengda
    Yu, Tongrui
    Luo, Min
    Hu, Haoliang
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [35] Multitask learning for spoken language understanding
    Tur, Gokhan
    2006 IEEE International Conference on Acoustics, Speech and Signal Processing, Vols 1-13, 2006, : 585 - 588
  • [36] Multitask Learning for Object Localization With Deep Reinforcement Learning
    Wang, Yan
    Zhang, Lei
    Wang, Lituan
    Wang, Zizhou
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2019, 11 (04) : 573 - 580
  • [37] Multitask reinforcement learning on the distribution of MDPs
    Tanaka, F
    Yamamura, M
    2003 IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE IN ROBOTICS AND AUTOMATION, VOLS I-III, PROCEEDINGS, 2003, : 1108 - 1113
  • [38] Multitask Learning for Authenticity and Authorship Detection
    Chhatwal, Gurunameh Singh
    Zhao, Jiashu
ELECTRONICS, 2025, 14 (06)
  • [39] Multitask Learning for Visual Question Answering
    Ma, Jie
    Liu, Jun
    Lin, Qika
    Wu, Bei
    Wang, Yaxian
    You, Yang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (03) : 1380 - 1394
  • [40] Online Multitask Relative Similarity Learning
    Hao, Shuji
    Zhao, Peilin
    Liu, Yong
    Hoi, Steven C. H.
    Miao, Chunyan
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 1823 - 1829