Trainable Weights for Multitask Learning

Cited by: 0
Authors
Ryu, Chaeeun [1 ]
Lee, Changwoo [2 ,3 ]
Choi, Hyuk Jin [4 ]
Lee, Chang-Hyun [5 ]
Jeon, Byoungjun [6 ]
Chie, Eui Kyu [7 ,8 ]
Kim, Young-Gon [2 ,9 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Comp Educ, Seoul 03063, South Korea
[2] Seoul Natl Univ Hosp, Dept Transdisciplinary Med, Seoul 03080, South Korea
[3] Seoul Natl Univ, Dept Med Device Dev, Coll Med, Seoul 03080, South Korea
[4] Seoul Natl Univ Hosp Healthcare Syst Gangnam Ctr, Dept Ophthalmol, Seoul 06236, South Korea
[5] Seoul Natl Univ Hosp, Dept Neurosurg, Seoul 03080, South Korea
[6] Seoul Natl Univ, Dept Neurosurg, Coll Med, Seoul 03080, South Korea
[7] Seoul Natl Univ Hosp, Dept Radiat Oncol, Seoul 03080, South Korea
[8] Seoul Natl Univ, Dept Radiat Oncol, Coll Med, Seoul 03080, South Korea
[9] Seoul Natl Univ, Dept Med, Coll Med, Seoul 03080, South Korea
Keywords
Auxiliary task learning; incremental learning; multitask learning; trainable parameters
DOI
10.1109/ACCESS.2023.3319072
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Research on multitask learning has been steadily increasing due to its advantages, such as preventing overfitting, averting catastrophic forgetting, solving multiple inseparable tasks, and coping with data shortage. Here, we ask whether multitask learning should incorporate different orderings of feature levels, based on the distinct characteristics of tasks and their interrelationships. While many classification tasks commonly leverage only the features extracted from the last layer, we hypothesized that, given the distinct characteristics of different tasks, there may be a need to encompass different representation levels, i.e., different orderings of feature levels. Hence, we utilized the knowledge at different representation levels via features extracted from the various blocks of the main module and applied trainable parameters as weights on those features. In other words, we optimize an answer to this question by learning to weigh the features in a task-specific manner and solving each task with a combination of the newly weighted features. Our method, SimPara, presents a modular topology for multitask learning that is memory- and computation-efficient, effective, and easily applicable to diverse tasks and models. To show that our approach is task-agnostic and broadly applicable, we demonstrate its effectiveness in auxiliary task learning, active learning, and multilabel learning settings. This work underscores that by simply learning weights that better order the features learned by a single backbone, we can achieve better task-specific performance.
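The core mechanism the abstract describes — trainable, task-specific weights over features taken from several blocks of a shared backbone — can be sketched roughly as follows. This is an illustrative stand-in, not the authors' implementation: the class name, the simple linear "blocks" standing in for a real backbone's stages, the softmax over weights, and the uniform (zero) initialization are all assumptions.

```python
import torch
import torch.nn as nn

class SimParaSketch(nn.Module):
    """Hypothetical sketch: per-task trainable weights over block-level features."""

    def __init__(self, num_blocks, feat_dim, num_tasks, num_classes):
        super().__init__()
        # Simple linear blocks stand in for the stages of a real backbone.
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
            for _ in range(num_blocks)
        )
        # Trainable parameters acting as per-task weights on the block
        # features; zeros give a uniform mix after softmax.
        self.task_weights = nn.Parameter(torch.zeros(num_tasks, num_blocks))
        # One lightweight head per task.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_tasks)
        )

    def forward(self, x):
        feats = []
        h = x
        for block in self.blocks:
            h = block(h)
            feats.append(h)                    # keep every block's features
        feats = torch.stack(feats, dim=1)      # (batch, num_blocks, feat_dim)
        outputs = []
        for t, head in enumerate(self.heads):
            w = torch.softmax(self.task_weights[t], dim=0)   # task-specific mix
            mixed = (w[None, :, None] * feats).sum(dim=1)    # weighted combination
            outputs.append(head(mixed))
        return outputs

model = SimParaSketch(num_blocks=4, feat_dim=16, num_tasks=3, num_classes=5)
outs = model(torch.randn(8, 16))   # one logit tensor per task
```

Because `task_weights` is an `nn.Parameter`, the per-task feature ordering is learned jointly with the rest of the model by ordinary backpropagation, which matches the abstract's claim of a cheap, modular add-on to a single backbone.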
Pages: 105633-105641 (9 pages)