Multi-View Projection Learning via Adaptive Graph Embedding for Dimensionality Reduction

Cited by: 0
Authors
Li, Haohao [1 ]
Gao, Mingliang [2 ]
Wang, Huibing [3 ]
Jeon, Gwanggil [1 ,4 ]
Affiliations
[1] Zhejiang Sci Tech Univ, Dept Math, Hangzhou 310018, Peoples R China
[2] Shandong Univ Technol, Sch Elect & Elect Engn, Zibo 255000, Peoples R China
[3] Dalian Maritime Univ, Coll Informat & Sci Technol, Dalian 116021, Peoples R China
[4] Incheon Natl Univ, Dept Embedded Syst Engn, Incheon 22012, South Korea
Keywords
multi-view learning; dimensionality reduction; graph learning; self-weighted learning; SCALE;
DOI
10.3390/electronics12132934
Chinese Library Classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
To explore the complex structures and relationships hidden in data, many graph-based dimensionality reduction methods have been investigated and extended to the multi-view learning field. For multi-view dimensionality reduction, the key is to extract the complementary and compatible information across views in order to analyze the complex underlying structure of the samples, which remains a challenging task. We propose a novel multi-view dimensionality reduction algorithm that integrates underlying-structure learning and per-view dimensionality reduction into a single framework. Because a prespecified graph derived from the original noisy high-dimensional data is usually of low quality, the subspace constructed from such a graph is also of low quality. To obtain an optimal graph for dimensionality reduction, we propose a framework that jointly learns the affinity from the low-dimensional representations of all views and performs dimensionality reduction based on it. Although the original data are noisy, their local structure information is still valuable; therefore, during graph learning we also inject the information of predefined graphs built on each view's features into the optimal graph. Moreover, since weighting each view according to its importance is essential in multi-view learning, the proposed GoMPL automatically allocates an appropriate weight to each view during the graph learning process. The resulting optimal graph is then used to learn a projection matrix for each individual view by graph embedding. We provide an effective alternating update method that jointly learns the optimal graph and the optimal subspace for each view, and we conduct extensive experiments on various benchmark datasets to evaluate the effectiveness of the proposed method.
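The abstract describes alternating between (a) fusing per-view affinity graphs with adaptive view weights and (b) computing a low-dimensional embedding from the fused graph. The following is a minimal illustrative sketch of that general scheme, not the authors' GoMPL algorithm: it replaces the per-view projection matrices with a direct Laplacian-eigenmap-style spectral embedding for brevity, and the function names (`knn_affinity`, `fused_embedding`) and the inverse-cost weighting rule are assumptions chosen for illustration.

```python
import numpy as np

def knn_affinity(X, k=5):
    """Gaussian-weighted k-nearest-neighbor affinity graph for one view."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (np.median(d2) + 1e-12))
    np.fill_diagonal(W, 0.0)
    # keep only the k strongest neighbors per row, then symmetrize
    drop = np.argsort(-W, axis=1)[:, k:]
    for i in range(W.shape[0]):
        W[i, drop[i]] = 0.0
    return (W + W.T) / 2

def fused_embedding(views, dim=2, n_iter=10, k=5):
    """Alternate between embedding on the fused graph and re-weighting views.

    Each view's weight is set inversely proportional to how poorly the
    current embedding preserves that view's predefined local graph.
    """
    graphs = [knn_affinity(X, k) for X in views]
    alpha = np.ones(len(views)) / len(views)              # uniform initial weights
    for _ in range(n_iter):
        W = sum(a * G for a, G in zip(alpha, graphs))     # weighted graph fusion
        L = np.diag(W.sum(1)) - W                         # graph Laplacian
        _, vecs = np.linalg.eigh(L)
        Y = vecs[:, 1:dim + 1]                            # skip the trivial eigenvector
        # per-view smoothness cost tr(Y^T L_v Y); smaller cost -> larger weight
        costs = np.array([np.trace(Y.T @ (np.diag(G.sum(1)) - G) @ Y)
                          for G in graphs])
        inv = 1.0 / (costs + 1e-12)
        alpha = inv / inv.sum()
    return Y, alpha
```

A usage sketch: passing two feature matrices over the same samples, e.g. `Y, alpha = fused_embedding([X1, X2], dim=2)`, returns a shared low-dimensional representation `Y` and the learned view weights `alpha` (which sum to one). The real method additionally learns explicit projection matrices per view, so unseen samples can be embedded, which this direct-embedding sketch omits.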
Pages: 14