Multi-task gradient descent for multi-task learning

Cited by: 16
Authors
Bai, Lu [1 ]
Ong, Yew-Soon [1 ]
He, Tiantian [1 ]
Gupta, Abhishek [2 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore, Singapore
[2] Agcy Sci Technol & Res, Singapore Inst Mfg Technol, Singapore, Singapore
Funding
National Research Foundation, Singapore
Keywords
Multi-task gradient descent; Knowledge transfer; Multi-task learning; Multi-label learning; LABEL; CLASSIFICATION;
DOI
10.1007/s12293-020-00316-3
CLC classification number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Multi-Task Learning (MTL) aims to solve a group of related learning tasks simultaneously, leveraging the salutary knowledge memes shared among the tasks to improve generalization performance. Many prevalent approaches focus on designing a sophisticated cost function that integrates all the learning tasks and models the task-task relationships in a predefined manner. Different from previous approaches, in this paper we propose a novel Multi-task Gradient Descent (MGD) framework that improves the generalization performance of multiple tasks through knowledge transfer. The uniqueness of MGD lies in assuming individual task-specific learning objectives at the start, with the cost functions implicitly changing during parameter optimization based on task-task relationships. Specifically, MGD optimizes the individual cost function of each task using a reformative gradient descent iteration, in which relations to other tasks are exploited by transferring parameter values (serving as the computational representations of memes) from those tasks. Theoretical analysis shows that the proposed framework converges under any appropriate transfer mechanism. Compared with existing MTL approaches, MGD provides a novel, easy-to-implement framework for MTL that can mitigate negative transfer during learning through asymmetric transfer. MGD has been compared with both classical and state-of-the-art approaches on multiple MTL datasets, and the competitive experimental results validate the effectiveness of the proposed algorithm.
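A minimal sketch of the kind of update the abstract describes, in Python/NumPy: each task keeps its own cost function and parameter vector, and every gradient step also mixes in parameter values transferred from the other tasks through an asymmetric transfer matrix. The quadratic toy losses, the particular transfer matrix, the step size, and the choice to evaluate the gradient at the previous (un-mixed) iterate are all illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of a multi-task gradient-descent step with parameter transfer.
# Assumed form: theta_i <- sum_j M[i, j] * theta_j - lr * grad_i(theta_i),
# where M is an asymmetric, row-stochastic transfer matrix.
import numpy as np

def mgd_step(params, grads, transfer, lr=0.1):
    """One coupled gradient-descent iteration over all tasks.

    params:   list of per-task parameter vectors
    grads:    list of gradients of each task-specific cost at its own params
    transfer: transfer[i, j] is the weight task i places on parameters
              received from task j (asymmetric in general)
    """
    n = len(params)
    mixed = [sum(transfer[i, j] * params[j] for j in range(n)) for i in range(n)]
    return [mixed[i] - lr * grads[i] for i in range(n)]

# Toy example: two related quadratic tasks with nearby optima.
targets = [np.array([1.0, 0.0]), np.array([0.8, 0.2])]
params = [np.zeros(2), np.zeros(2)]
transfer = np.array([[0.9, 0.1],    # task 0 keeps most of its own parameters
                     [0.2, 0.8]])   # task 1 borrows more, asymmetrically

for _ in range(200):
    grads = [p - t for p, t in zip(params, targets)]  # grad of 0.5 * ||p - t||^2
    params = mgd_step(params, grads, transfer)

print(params)  # each task ends up near its own optimum, pulled slightly toward the other
```

On this toy problem the coupled iteration is contractive for a row-stochastic transfer matrix and a small step size, which loosely mirrors the convergence behavior the abstract refers to; the paper's own analysis and transfer mechanism should be consulted for the actual conditions.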
Pages: 355-369
Number of pages: 15