Learning rates of multitask kernel methods

Cited by: 0
Authors
Sun, Haoming [1]
Zhang, Haizhang [1,2]
Affiliations
[1] Sun Yat-sen University, School of Mathematics (Zhuhai), Zhuhai 519082, People's Republic of China
[2] Sun Yat-sen University, Guangdong Provincial Key Laboratory of Computational Science, Guangzhou 510000, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
coefficient regularization; learning rates; matrix-valued reproducing kernels; multitask learning; regularization networks; vector-valued reproducing kernel Hilbert spaces
DOI
10.1002/mma.9176
Chinese Library Classification
O29 [Applied Mathematics]
Discipline Classification Code
070104
Abstract
Learning theory aims to build a solid mathematical foundation for machine learning. A core objective of learning theory is to estimate the learning rates of various learning algorithms in order to analyze their generalization ability. So far, most such research efforts have focused on single-task kernel methods; there is little parallel work on the learning rates of multitask kernel methods. We present an analysis of the learning rates of multitask regularization networks and multitask l2-norm coefficient regularization. Compared to the existing work on learning rate estimates for multitask regularization networks, our study is more broadly applicable in that we do not require the regression function to lie in the vector-valued reproducing kernel Hilbert space of the chosen matrix-valued reproducing kernel. Our result on the learning rate of multitask l2-norm coefficient regularization is new. For both methods, our estimates reveal a quantitative dependence of the learning rates on the number of tasks, which is also new in the literature.
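For orientation only (this sketch is not taken from the paper, and the paper's exact normalization of the sample size and regularization parameter may differ), the two estimators studied can be written in their standard forms. With samples z = {(x_i, y_i)}_{i=1}^m, vector-valued outputs y_i in R^n (one component per task), an n-by-n matrix-valued reproducing kernel K, and its vector-valued reproducing kernel Hilbert space H_K, the multitask regularization network is
\[
f_{\mathbf z,\lambda} \;=\; \operatorname*{arg\,min}_{f\in\mathcal H_K}\ \frac{1}{m}\sum_{i=1}^{m}\bigl\|f(x_i)-y_i\bigr\|_{\mathbb R^{n}}^{2}\;+\;\lambda\,\|f\|_{\mathcal H_K}^{2},
\]
while the l2-norm coefficient regularization scheme searches over expansions in the kernel sections at the sample points:
\[
f_{\mathbf z,\lambda}\;=\;\sum_{j=1}^{m}K(\cdot,x_j)\,c_j^{*},\qquad
\mathbf c^{*}\;=\;\operatorname*{arg\,min}_{\mathbf c\in\mathbb R^{nm}}\ \frac{1}{m}\sum_{i=1}^{m}\Bigl\|\sum_{j=1}^{m}K(x_i,x_j)\,c_j-y_i\Bigr\|_{\mathbb R^{n}}^{2}\;+\;\lambda\sum_{j=1}^{m}\|c_j\|_{\mathbb R^{n}}^{2}.
\]
Learning rate estimates of the kind described in the abstract bound the excess risk of such estimators in terms of the sample size m, the regularization parameter lambda, and the number of tasks n.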
Pages: 11212-11228
Number of pages: 17