Multi-View Multi-Label Learning With Sparse Feature Selection for Image Annotation

Cited by: 478
Authors
Zhang, Yongshan [1 ]
Wu, Jia [2 ]
Cai, Zhihua [1 ]
Yu, Philip S. [3 ,4 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, Wuhan 430074, Peoples R China
[2] Macquarie Univ, Fac Sci & Engn, Dept Comp, Sydney, NSW 2109, Australia
[3] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
[4] Tsinghua Univ, Inst Data Sci, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; Australian Research Council;
Keywords
Feature extraction; Correlation; Noise measurement; Kernel; Learning systems; Computer science; Task analysis; Feature selection; sparse learning; multi-view learning; multi-label learning; image annotation; UNSUPERVISED FEATURE-SELECTION; INFORMATION;
DOI
10.1109/TMM.2020.2966887
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
In image analysis, image samples are often represented by multiple view features and associated with multiple class labels for better interpretation. However, multi-view data may include noisy, irrelevant and redundant features, while multiple class labels can be noisy and incomplete. Due to these data characteristics, it is hard to perform feature selection on multi-view multi-label data. To address these challenges, in this paper, we propose a novel multi-view multi-label sparse feature selection (MSFS) method, which exploits both view relations and label correlations to select discriminative features for further learning. Specifically, the multi-labeled information is decomposed into a reduced latent label representation to capture higher-level concepts and correlations among multiple labels. Multiple local geometric structures are constructed to exploit visual similarities and relations for different views. By taking full advantage of the latent label representation and multiple local geometric structures, a sparse regression model with l2,1-norm and Frobenius norm (F-norm) penalty terms is utilized to perform hierarchical feature selection, where the F-norm penalty performs high-level (i.e., view-wise) feature selection to preserve the informative views and the l2,1-norm penalty conducts low-level (i.e., row-wise) feature selection to remove noisy features. To solve the proposed formulation, we also devise a simple yet efficient iterative algorithm. Experiments and comparisons on real-world image datasets demonstrate the effectiveness and potential of MSFS.
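The two penalty terms in the abstract can be sketched numerically. The snippet below is a minimal illustration under stated assumptions, not the authors' exact objective: the l2,1-norm sums the Euclidean norms of a weight matrix's rows, so penalizing it zeroes whole rows (row-wise feature removal), while the F-norm, applied to each view's block of rows, scores a view as a whole (view-wise selection). The function names and toy matrix are hypothetical, introduced only for illustration.

```python
import numpy as np

def l21_norm(W):
    """Sum of the Euclidean norms of the rows of W.

    Penalizing this term drives entire rows toward zero, discarding the
    corresponding features -- the row-wise selection in the abstract.
    """
    return float(np.sum(np.linalg.norm(W, axis=1)))

def frobenius_norm(W):
    """Frobenius norm of W. Applied per view block, it scores that
    view as a whole -- the view-wise selection in the abstract.
    """
    return float(np.linalg.norm(W, 'fro'))

# Toy weight matrix: the first row carries signal; the second is all
# zero, i.e. a feature already pruned by the l2,1 penalty.
W = np.array([[3.0, 4.0],
              [0.0, 0.0]])

print(l21_norm(W))        # 5.0 (only the nonzero row contributes)
print(frobenius_norm(W))  # 5.0

# A simplified hierarchical objective would combine a regression loss
# with alpha * l21_norm(W) plus beta * sum of frobenius_norm(W_v) over
# the per-view blocks W_v of W (alpha, beta are trade-off weights).
```

The key design point this illustrates is that the l2,1 penalty is separable over rows while the F-norm is not, which is why the former induces row-level sparsity and the latter acts on a view's block as a single unit.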
Pages: 2844-2857
Page count: 14