Large-Margin Predictive Latent Subspace Learning for Multiview Data Analysis

Cited by: 93
Authors
Chen, Ning [1 ]
Zhu, Jun [1 ]
Sun, Fuchun [1 ]
Xing, Eric P. [2]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Tsinghua Natl Lab Informat Sci & Technol, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[2] Carnegie Mellon Univ, Sch Comp Sci, Pittsburgh, PA 15213 USA
Funding
U.S. National Science Foundation;
Keywords
Latent subspace model; large-margin learning; classification; regression; image retrieval and annotation;
DOI
10.1109/TPAMI.2012.64
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning salient representations of multiview data is an essential step in many applications such as image classification, retrieval, and annotation. Standard predictive methods, such as support vector machines, often use all available features directly, without accounting for the presence of distinct views and the resulting view dependencies, coherence, and complementarity that offer key insights into the semantics of the data; they therefore tend to perform poorly and cannot support view-level analysis. This paper presents a statistical method for learning a predictive subspace representation underlying multiple views, leveraging both multiview dependencies and the availability of supervising side information. Our approach is based on a multiview latent subspace Markov network (MN) that makes a weak conditional independence assumption: the multiview observations and the response variables are conditionally independent given a set of latent variables. To learn the latent subspace MN, we develop a large-margin approach that jointly maximizes the data likelihood and minimizes a prediction loss on the training data. Learning and inference are performed efficiently with a contrastive divergence method. Finally, we extensively evaluate the large-margin latent MN on real image and hotel review datasets for classification, regression, image annotation, and retrieval. Our results demonstrate that the large-margin approach achieves significant improvements both in prediction performance and in discovering predictive latent subspace representations.
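The abstract's central idea — jointly maximizing data likelihood and minimizing a large-margin prediction loss — can be illustrated as a single regularized objective of the form minimize −log p(X | w) + C · Σᵢ hinge(yᵢ, f_w(xᵢ)). The sketch below is a hypothetical toy illustration of that trade-off only, not the paper's Markov network model: the squared-reconstruction "fit" term, the linear score, and all names are illustrative assumptions.

```python
def hinge(y, score):
    """Standard large-margin (hinge) loss for a label y in {-1, +1}."""
    return max(0.0, 1.0 - y * score)

def joint_objective(w, data, C=1.0):
    """Toy joint objective: data-fit term plus C-weighted hinge loss.

    data is a list of (x, y) pairs with scalar x and y in {-1, +1}.
    """
    # Data-fit term: squared error of a linear "reconstruction" w * x,
    # a crude stand-in for the negative log-likelihood of a
    # latent-subspace model (illustrative assumption).
    neg_loglik = sum((x - w * x) ** 2 for x, _ in data)
    # Large-margin prediction term on the linear score w * x.
    loss = sum(hinge(y, w * x) for x, y in data)
    return neg_loglik + C * loss

# With w = 1 the toy data is reconstructed exactly and classified
# with margin >= 1, so the objective is zero.
print(joint_objective(1.0, [(1.0, 1), (-2.0, -1)]))  # → 0.0
```

The constant C plays the usual regularization-trade-off role: larger C pushes the solution toward better margins at the expense of the likelihood (fit) term, which mirrors the likelihood-versus-loss balance described in the abstract.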
Pages: 2365-2378
Page count: 14