Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning

Cited: 0
Authors
Yousefi, Niloofar [1]
Lei, Yunwen [2]
Kloft, Marius [3]
Mollaghasemi, Mansooreh [4]
Anagnostopoulos, Georgios C. [5]
Affiliations
[1] Univ Cent Florida, Dept Elect Engn & Comp Sci, Orlando, FL 32816 USA
[2] Southern Univ Sci & Technol, Shenzhen Key Lab Computat Intelligence, Dept Comp Sci & Engn, 1088 Xueyuan Ave, Shenzhen 518055, Guangdong, Peoples R China
[3] Tech Univ Kaiserslautern, Dept Comp Sci, D-67653 Kaiserslautern, Germany
[4] Univ Cent Florida, Dept Ind Engn & Management Syst, Orlando, FL 32816 USA
[5] Florida Inst Technol, Dept Elect & Comp Engn, Melbourne, FL 32901 USA
Funding
U.S. National Science Foundation;
Keywords
Excess Risk Bounds; Local Rademacher Complexity; Multi-task Learning; MULTIPLE TASKS; RISK; INEQUALITIES; ALGORITHM; BOUNDS; ERROR; MODEL;
DOI
None available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), with which we establish sharp excess risk bounds for MTL in terms of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for norm-regularized hypothesis classes, which applies not only to MTL but also to the standard Single-Task Learning (STL) setting. By combining both results, one can easily derive fast-rate bounds on the excess risk for many prominent MTL methods, including, as we demonstrate, Schatten-norm, group-norm, and graph-regularized MTL. The derived bounds reflect a relationship akin to a conservation law of asymptotic convergence rates: when compared to the rates obtained via a traditional, global Rademacher analysis, this very relationship allows for trading off slower rates with respect to the number of tasks for faster rates with respect to the number of available samples per task.
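For context on the kind of guarantee the abstract refers to, the classical single-task local Rademacher complexity result of Bartlett, Bousquet, and Mendelson takes, up to unspecified constants, the following generic form. This is a sketch of the STL backbone only, not this paper's MTL theorem; the constants $c_1, c_2$ are placeholders.

```latex
% Generic single-task LRC excess-risk bound (sketch; constants c_1, c_2
% unspecified, and the paper's MTL theorem differs).
% Assumptions: each f in the class F satisfies the variance condition
% P f^2 <= B * P f, and r^* is the fixed point of a sub-root function
% upper-bounding the local Rademacher complexity of F.
% Then, with probability at least 1 - e^{-x}, for all f in F and K > 1:
\[
  P f \;\le\; \frac{K}{K-1}\, P_n f
        \;+\; c_1\, \frac{K}{B}\, r^{*}
        \;+\; c_2\, \frac{(1 + B K)\, x}{n}.
\]
```

Fast rates arise because the fixed point $r^{*}$ can decay as fast as $O(1/n)$, rather than the $O(1/\sqrt{n})$ rate delivered by a global Rademacher analysis; the paper extends this machinery to the multi-task setting, where the trade-off between the number of tasks and the per-task sample size described in the abstract appears.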
Pages: 47