Expand globally, shrink locally: Discriminant multi-label learning with missing labels

Cited by: 52
Authors
Ma, Zhongchen [1 ,2 ]
Chen, Songcan [2 ,3 ]
Affiliations
[1] Jiangsu Univ, Sch Comp Sci & Commun Engn, Zhenjiang 212013, Jiangsu, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut NUAA, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[3] MIIT Key Lab Pattern Anal & Machine Intelligence, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-label learning; Missing labels; Local low-rank label structure; Global low-rank label structure; Label discriminant information; CLASSIFICATION;
DOI
10.1016/j.patcog.2020.107675
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In multi-label learning, missing labels pose a major challenge. Many methods attempt to recover missing labels by exploiting the low-rank structure of the label matrix. However, these methods exploit only the global low-rank label structure and, to some extent, ignore both local low-rank label structures and label discriminant information, leaving room for further performance improvement. In this paper, we develop a simple yet effective discriminant multi-label learning (DM2L) method for multi-label learning with missing labels. Specifically, we impose low-rank structures on the predictions of instances sharing the same labels (local shrinking of rank) and a maximally separated, high-rank structure on the predictions of instances from different labels (global expanding of rank). The imposed low-rank structures help model both local and global low-rank label structures, while the imposed high-rank structure helps provide additional underlying discriminability. Our subsequent theoretical analysis also supports these intuitions. In addition, we provide a nonlinear extension via the kernel trick to enhance DM2L and establish a concave-convex objective to learn these models. Compared with other methods, our method involves the fewest assumptions and only one hyper-parameter. Even so, extensive experiments show that our method still outperforms the state-of-the-art methods. (C) 2020 Elsevier Ltd. All rights reserved.
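The record does not reproduce the model's formulation, but the "shrink locally, expand globally" idea in the abstract can be sketched as a difference-of-convex program. The sketch below is an assumption on my part, not taken from the record: it posits a linear predictor $W$, an instance matrix $X$ with label matrix $Y$, per-label submatrices $X_k$ (the rows of $X$ annotated with label $k$, for $q$ labels in total), a convex multi-label loss $\mathcal{L}$, and a single trade-off parameter $\lambda$:

% Hypothetical objective consistent with the abstract (notation assumed, not from the record):
% per-label prediction blocks are pushed toward low rank (local shrinking of rank),
% while the full prediction matrix is pushed toward high rank (global expanding of rank).
\[
  \min_{W}\;\; \mathcal{L}(XW,\,Y)
  \;+\; \lambda \left( \sum_{k=1}^{q} \bigl\| X_{k} W \bigr\|_{*}
  \;-\; \bigl\| X W \bigr\|_{*} \right),
\]

where $\|\cdot\|_{*}$ denotes the nuclear norm. The loss and the summed per-label terms are convex while the negated nuclear norm is concave, which is consistent with the concave-convex objective and the single hyper-parameter mentioned in the abstract.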
Pages: 10