Trainable Weights for Multitask Learning

Cited by: 0
Authors
Ryu, Chaeeun [1 ]
Lee, Changwoo [2 ,3 ]
Choi, Hyuk Jin [4 ]
Lee, Chang-Hyun [5 ]
Jeon, Byoungjun [6 ]
Chie, Eui Kyu [7 ,8 ]
Kim, Young-Gon [2 ,9 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Comp Educ, Seoul 03063, South Korea
[2] Seoul Natl Univ Hosp, Dept Transdisciplinary Med, Seoul 03080, South Korea
[3] Seoul Natl Univ, Dept Med Device Dev, Coll Med, Seoul 03080, South Korea
[4] Seoul Natl Univ Hosp Healthcare Syst Gangnam Ctr, Dept Ophthalmol, Seoul 06236, South Korea
[5] Seoul Natl Univ Hosp, Dept Neurosurg, Seoul 03080, South Korea
[6] Seoul Natl Univ, Dept Neurosurg, Coll Med, Seoul 03080, South Korea
[7] Seoul Natl Univ Hosp, Dept Radiat Oncol, Seoul 03080, South Korea
[8] Seoul Natl Univ, Dept Radiat Oncol, Coll Med, Seoul 03080, South Korea
[9] Seoul Natl Univ, Dept Med, Coll Med, Seoul 03080, South Korea
Keywords
Auxiliary task learning; incremental learning; multitask learning; trainable parameters
DOI
10.1109/ACCESS.2023.3319072
Chinese Library Classification (CLC)
TP [Automation technology; computer technology];
Discipline Code
0812;
Abstract
Research on multitask learning has been steadily increasing due to its advantages, such as preventing overfitting, averting catastrophic forgetting, solving multiple inseparable tasks, and coping with data shortage. Here, we question whether to incorporate different orderings of feature levels based on the distinct characteristics of tasks and their interrelationships in multitask learning. While in many classification tasks it is common to leverage only the features extracted from the last layer, we reasoned that, given the distinct characteristics of tasks, there may be a need to encompass different representation levels, i.e., different orderings of feature levels. Hence, we utilized the knowledge of different representation levels by extracting features from the various blocks of the main module and applying trainable parameters as weights on those features. In other words, we optimized the solution to this question by learning to weigh the features in a task-specific manner and solving each task with a combination of the newly weighted features. Our method, SimPara, presents a modular topology for multitask learning that is efficient in terms of memory and computation, effective, and easily applicable to diverse tasks or models. To show that our approach is task-agnostic and highly applicable, we demonstrate its effectiveness in auxiliary task learning, active learning, and multilabel learning settings. This work underscores that, simply by learning weights that better order the features learned by a single backbone, we can achieve better task-specific performance from the model.
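The core idea of the abstract, trainable per-task weights over features taken from different backbone blocks, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block count, the common feature dimension, and the use of a softmax to normalize the trainable weights are all assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D weight vector.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical setup: features from 4 backbone blocks, each already
# pooled/projected to a common dimension of 16.
rng = np.random.default_rng(0)
block_feats = [rng.standard_normal(16) for _ in range(4)]

# One trainable weight per block, per task (initialized to zero, i.e.
# a uniform mixture before any training).
task_weights = {"task_a": np.zeros(4), "task_b": np.zeros(4)}

def combine(feats, w):
    """Weight each block's features and sum them into one task-specific vector."""
    alphas = softmax(w)  # normalize the trainable per-block weights
    return sum(a * f for a, f in zip(alphas, feats))

# Each task head would then consume its own weighted combination.
z_a = combine(block_feats, task_weights["task_a"])
print(z_a.shape)  # (16,)
```

In a real model these weights would be learnable parameters updated jointly with each task's loss, so that tasks relying on low-level features up-weight early blocks while others favor later ones.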
Pages: 105633 - 105641
Page count: 9
Related Papers
50 items total
  • [31] A Principled Approach for Learning Task Similarity in Multitask Learning
    Shui, Changjian
    Abbasi, Mahdieh
    Robitaille, Louis-Emile
    Wang, Boyu
    Gagne, Christian
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3446 - 3452
  • [32] Weighted Task Regularization for Multitask Learning
    Liu, Yintao
    Wu, Anqi
    Guo, Dong
    Yao, Ke-Thia
    Raghavendra, Cauligi S.
    2013 IEEE 13TH INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2013, : 399 - 406
  • [33] Multitask Learning for Network Traffic Classification
    Rezaei, Shahbaz
    Liu, Xin
    2020 29TH INTERNATIONAL CONFERENCE ON COMPUTER COMMUNICATIONS AND NETWORKS (ICCCN 2020), 2020,
  • [34] Sentiment and Sarcasm Classification With Multitask Learning
    Majumder, Navonil
    Poria, Soujanya
    Peng, Haiyun
    Chhaya, Niyati
    Cambria, Erik
    Gelbukh, Alexander
    IEEE INTELLIGENT SYSTEMS, 2019, 34 (03) : 38 - 43
  • [35] Provable Benefit of Multitask Representation Learning in Reinforcement Learning
    Cheng, Yuan
    Feng, Songtao
    Yang, Jing
    Zhang, Hong
    Liang, Yingbin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [36] Semisupervised Multitask Learning With Gaussian Processes
    Skolidis, Grigorios
    Sanguinetti, Guido
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2013, 24 (12) : 2101 - 2112
  • [37] Multitask machine learning for financial forecasting
    Di Persio, Luca
    Honchar, Oleksandr
    International Journal of Circuits, Systems and Signal Processing, 2018, 12 : 444 - 451
  • [38] The perceptual costs and benefits of learning to multitask
    Webb, Ben S.
    McGraw, Paul V.
    Levi, Dennis M.
    Li, Roger W.
    PERCEPTION, 2015, 44 : 47 - 48
  • [39] A Multitask Deep Learning Framework for DNER
    Jin, Ran
    Hou, Tengda
    Yu, Tongrui
    Luo, Min
    Hu, Haoliang
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [40] Multitask learning for spoken language understanding
    Tur, Gokhan
    2006 IEEE International Conference on Acoustics, Speech and Signal Processing, Vols 1-13, 2006, : 585 - 588