Multi-Label Image Categorization With Sparse Factor Representation

Times Cited: 67
Authors
Sun, Fuming [1 ]
Tang, Jinhui [2 ]
Li, Haojie [3 ]
Qi, Guo-Jun [4 ]
Huang, Thomas S. [4 ]
Affiliations
[1] Liaoning Univ Technol, Sch Elect & Informat Engn, Liaoning 110036, Peoples R China
[2] Nanjing Univ Sci & Technol, Nanjing 210044, Jiangsu, Peoples R China
[3] Dalian Univ Technol, Dalian 116024, Peoples R China
[4] Univ Illinois, Beckman Inst Adv Sci & Technol, Urbana, IL 61801 USA
Funding
National Natural Science Foundation of China;
Keywords
Image categorization; multilabel; sparse; classification; annotation; selection;
DOI
10.1109/TIP.2014.2298978
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The goal of multilabel classification is to reveal the underlying label correlations and thereby boost the accuracy of classification tasks. Most existing multilabel classifiers attempt to exhaustively explore the dependencies among correlated labels, which increases the risk of incorporating unnecessary label dependencies that are detrimental to classification performance. In fact, not all label correlations are indispensable to a multilabel model: negligible or fragile correlations do not generalize well to the testing data, especially when the label correlations in the training and testing sets differ. To minimize such negative effects, we propose to learn a sparse structure of label dependency. The underlying philosophy is that, as long as a label dependency cannot be reliably explained, the principle of parsimony should govern how label correlations enter the model. The resulting sparse label-dependency structure discards outlying correlations between labels, making the learned model more generalizable to future samples. Experiments on real-world data sets show competitive results compared with existing algorithms.
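The parsimony idea in the abstract, keeping only label correlations strong enough to trust and zeroing out fragile ones, can be illustrated with a minimal sketch. This is not the paper's algorithm: it simply soft-thresholds an empirical label-correlation matrix, and the threshold `tau` and the toy label data are assumptions made for illustration.

```python
import numpy as np

def sparse_label_correlations(Y, tau=0.3):
    """Keep only strong label correlations (principle of parsimony).

    Y   : (n_samples, n_labels) binary label matrix
    tau : soft-threshold; correlations weaker than tau are discarded
    """
    # Empirical Pearson correlation between label columns.
    C = np.corrcoef(Y, rowvar=False)
    np.fill_diagonal(C, 0.0)  # ignore trivial self-correlation
    # Soft-thresholding: shrink magnitudes by tau and zero out the rest,
    # so negligible or fragile correlations drop out of the structure.
    return np.sign(C) * np.maximum(np.abs(C) - tau, 0.0)

# Toy data: labels 0 and 1 nearly always co-occur; label 2 is independent.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 200)
b = a ^ (rng.random(200) < 0.1)       # label 1 = label 0 with 10% flips
c = rng.integers(0, 2, 200)           # label 2 = unrelated noise
Y = np.column_stack([a, b, c])

S = sparse_label_correlations(Y, tau=0.3)
print(S)  # only the (0,1)/(1,0) entries survive the threshold
```

Only the genuinely correlated label pair remains in `S`; the spurious correlations involving the noise label fall below `tau` and are set exactly to zero, which is the "discard outlying correlations" behavior the abstract describes.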
Pages: 1028-1037
Page count: 10