Multiview Learning With Robust Double-Sided Twin SVM

Cited by: 170
Authors
Ye, Qiaolin [1 ]
Huang, Peng [1 ]
Zhang, Zhao [2 ]
Zheng, Yuhui [3 ]
Fu, Liyong [4 ]
Yang, Wankou [5 ]
Affiliations
[1] Nanjing Forestry Univ, Coll Informat Sci & Technol, Nanjing 210037, Peoples R China
[2] Hefei Univ Technol, Key Lab Knowledge Engn Big Data & Intelligent Int, Minist Educ, Hefei 230009, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Jiangsu Engn Ctr Network Monitoring, Nanjing 210044, Peoples R China
[4] Chinese Acad Forestry, Inst Forest Resource Informat Techn, Beijing 100091, Peoples R China
[5] Southeast Univ, Sch Automat, Nanjing 210096, Peoples R China
Funding
US National Science Foundation (NSF);
Keywords
Support vector machines; Eigenvalues and eigenfunctions; Robustness; Task analysis; Standards; Minimization; Linear programming; Double-sided constraints; multiplane support vector machine (SVM); multiview classification; outlier robustness;
DOI
10.1109/TCYB.2021.3088519
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Multiview learning (MVL), which enhances a learner's performance by coordinating the complementarity and consistency among different views, has attracted much attention. The multiview generalized eigenvalue proximal support vector machine (MvGSVM) is a recently proposed and effective binary classification method that introduces the concept of MVL into the classical generalized eigenvalue proximal support vector machine (GEPSVM). However, this approach does not yet guarantee good classification performance or robustness. In this article, we develop the multiview robust double-sided twin SVM (MvRDTSVM) with SVM-type problems, which introduces a set of double-sided constraints into the proposed model to promote classification performance. To improve the robustness of MvRDTSVM against outliers, we adopt the L1-norm as the distance metric. A fast version of MvRDTSVM (called MvFRDTSVM) is also presented. The reformulated problems are complex, and solving them is very challenging. As one of the main contributions of this article, we design two effective iterative algorithms to optimize the proposed nonconvex problems and then conduct a theoretical analysis of the algorithms. The experimental results verify the effectiveness of the proposed methods.
Pages: 12745-12758
Page count: 14