Efficacy of Regularized Multitask Learning Based on SVM Models

Cited by: 7
Authors
Chen, Shaohan [1 ]
Fang, Zhou [2 ]
Lu, Sijie [1 ]
Gao, Chuanhou [1 ]
Affiliations
[1] Zhejiang Univ, Sch Math Sci, Hangzhou 310027, Peoples R China
[2] Swiss Fed Inst Technol, Dept Biosyst Sci & Engn, CH-8092 Zurich, Switzerland
Funding
National Natural Science Foundation of China
Keywords
Task analysis; Convergence; Support vector machines; Kernel; Upper bound; Particle measurements; Medical services; Error analysis; learning theory; multitask learning (MTL); preconvergence-rate (PCR) factor; regularization method; SUPPORT VECTOR MACHINE; MULTIPLE TASKS; CLASSIFICATION; ERROR;
DOI
10.1109/TCYB.2022.3196308
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
This article investigates the efficacy of a regularized multitask learning (MTL) framework based on SVMs (M-SVM) to answer two questions: whether MTL always provides reliable results, and how MTL outperforms independent learning. We first show that the M-SVM is Bayes risk consistent in the limit of large sample size. This implies that, despite task dissimilarities, the M-SVM always produces a reliable decision rule for each task in terms of misclassification error once the data size is large enough. Furthermore, we find that the task interaction vanishes as the data size goes to infinity, and that the convergence rates of the M-SVM and its single-task counterpart share the same upper bound. The former suggests that the M-SVM cannot improve the limit classifier's performance; based on the latter, we conjecture that the optimal convergence rate is not improved when the task number is fixed. As a novel insight into MTL, our theoretical and experimental results agree closely that the benefit of MTL methods lies in improving the preconvergence-rate (PCR) factor (defined in Section III) rather than the convergence rate, and that this improvement of PCR factors is more significant when the data size is small. In addition, experimental results on five other MTL methods demonstrate the generality of this new insight.
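Below is a minimal sketch of the kind of regularized multitask SVM the abstract refers to. It is not the authors' exact M-SVM, but the closely related construction of Evgeniou and Pontil (2004), in which each task's classifier is f_t = f_0 + v_t and coupling the tasks through the shared component f_0 reduces to a single SVM with the multitask kernel K((x, s), (x', t)) = (1/mu + 1[s = t]) * k(x, x'). The RBF base kernel, the coupling strength mu, and the two-task toy data are illustrative assumptions, not details from the article.

```python
# Sketch of regularized multitask SVM learning via the multitask-kernel
# construction of Evgeniou & Pontil (2004). Assumptions: RBF base kernel,
# mu = 1.0, and synthetic two-task data; none of these come from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def multitask_kernel(X, tasks, Xp, tasksp, mu=1.0, gamma=1.0):
    """Gram matrix of K((x, s), (x', t)) = (1/mu + [s == t]) * k(x, x')."""
    base = rbf_kernel(X, Xp, gamma=gamma)            # base kernel k(x, x')
    same_task = tasks[:, None] == tasksp[None, :]    # indicator [s == t]
    return (1.0 / mu + same_task) * base

# Toy data: two related binary tasks whose class means differ by a small shift.
rng = np.random.default_rng(0)
def make_task(shift, n=100):
    y = rng.choice([-1, 1], size=n)
    X = y[:, None] * 1.0 + shift + rng.normal(scale=1.0, size=(n, 2))
    return X, y

X1, y1 = make_task(0.0)
X2, y2 = make_task(0.3)
X = np.vstack([X1, X2])
y = np.concatenate([y1, y2])
tasks = np.concatenate([np.zeros(len(y1), int), np.ones(len(y2), int)])

# One SVM on the pooled data with the multitask kernel couples the tasks.
K = multitask_kernel(X, tasks, X, tasks, mu=1.0)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)

# Predict for task 0 by evaluating the kernel against the training set.
Xt, yt = make_task(0.0, n=50)
Kt = multitask_kernel(Xt, np.zeros(len(yt), int), X, tasks, mu=1.0)
print("task-0 accuracy:", (clf.predict(Kt) == yt).mean())
```

The parameter mu controls the task interaction: as mu grows the 1/mu term shrinks and the tasks decouple toward independent per-task SVMs, while small mu forces the tasks toward a single shared classifier. This knob is one concrete way to see the abstract's point that the task interaction matters most at small sample sizes and washes out as the data size grows.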
Pages: 1339-1352 (14 pages)