Multitask Learning for Classification Problem via New Tight Relaxation of Rank Minimization

Cited by: 6
Authors
Chang, Wei [1 ,2 ]
Nie, Feiping [2 ,3 ]
Zhi, Yijie [4 ]
Wang, Rong [2 ,5 ]
Li, Xuelong [2 ,3 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Shaanxi, Peoples R China
[3] Northwestern Polytech Univ, Key Lab Intelligent Interact & Applicat, Minist Ind & Informat Technol, Xian 710072, Shaanxi, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Zhejiang, Peoples R China
[5] Northwestern Polytech Univ, Sch Cybersecur, Xian 710072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Low-rank constraint; multiclass classification; multitask learning (MTL); new tight relaxations; reweighted method; feature selection; regression
DOI
10.1109/TNNLS.2021.3132918
Chinese Library Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Multitask learning (MTL) is a joint learning paradigm that fuses multiple related tasks to achieve better performance than single-task learning methods. Many researchers have observed that different tasks with certain similarities share a low-dimensional common yet latent subspace. To obtain the low-rank structure shared across tasks, the trace norm has been used as a convex relaxation of the rank minimization problem. However, the trace norm is not a tight approximation of the rank function. To address this issue, we propose two novel regularization-based models that approximate the rank minimization problem by minimizing the k minimal singular values. In our new models, if the k minimal singular values are suppressed to zero, the rank is reduced accordingly. Compared with the standard trace norm, our new regularization-based models are tighter approximations, which helps them better capture the low-dimensional subspace shared among multiple tasks. Moreover, directly solving the exact rank minimization problem for our models is NP-hard. In this article, we propose two simple but effective strategies to optimize our models, which tactically solve the exact rank minimization problem by setting a large penalty parameter. Experimental results on synthetic and real-world benchmark datasets demonstrate that the proposed models learn the low-rank structure shared across tasks and outperform other classical MTL methods.
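The abstract's central contrast, the trace norm versus the sum of the k smallest singular values as a rank surrogate, can be illustrated numerically. The following Python/NumPy sketch is our own minimal illustration of that intuition, not the authors' algorithm; the matrix W, the choice of k, and the helper names are hypothetical assumptions for demonstration only.

```python
import numpy as np

def trace_norm(W):
    """Trace (nuclear) norm: the sum of ALL singular values of W."""
    return np.linalg.svd(W, compute_uv=False).sum()

def k_min_sv_sum(W, k):
    """Sum of the k smallest singular values of W (hypothetical helper).

    Driving this quantity to zero forces rank(W) <= min(W.shape) - k,
    while leaving the large, informative singular values unpenalized,
    which is the sense in which it is a tighter rank surrogate than
    the trace norm.
    """
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    return s[-k:].sum()

# Toy example: a 6 x 8 matrix that is rank 2 up to tiny noise.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))
W += 1e-6 * rng.standard_normal((6, 8))

print(trace_norm(W))       # large: dominated by the two informative directions
print(k_min_sv_sum(W, 4))  # ~0: the matrix is already (numerically) rank 2
```

Note how the trace norm stays large even for an (almost) rank-2 matrix, since it also penalizes the leading singular values that carry the shared structure, whereas the partial sum over the smallest singular values is already near zero for a low-rank matrix.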
Pages: 6055-6068
Number of pages: 14