Semi-supervised multiple empirical kernel learning with pseudo empirical loss and similarity regularization

Cited: 6
Authors
Guo, Wei [1 ,2 ]
Wang, Zhe [1 ,2 ]
Ma, Menghao [1 ,2 ]
Chen, Lilong [2 ]
Yang, Hai [2 ]
Li, Dongdong [2 ]
Du, Wenli [1 ]
Affiliations
[1] East China Univ Sci & Technol, Key Lab Smart Mfg Energy Chem Proc, Minist Educ, Shanghai, Peoples R China
[2] East China Univ Sci & Technol, Dept Comp Sci & Engn, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
machine learning; multiple empirical kernel learning; multiple kernel learning; semi-supervised learning; supervised learning; PREDICTION; MACHINES; GRAPH;
DOI
10.1002/int.22690
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Multiple empirical kernel learning (MEKL) is a scalable and efficient supervised algorithm based on labeled samples. However, real-world applications still contain huge amounts of unlabeled samples that supervised algorithms cannot exploit. To fully utilize the spatial distribution information of the unlabeled samples, this paper proposes a novel semi-supervised multiple empirical kernel learning (SSMEKL). SSMEKL enables multiple empirical kernel learning to achieve better classification performance with a small number of labeled samples and a large number of unlabeled samples. First, SSMEKL uses the collaborative information of multiple kernels to assign pseudo labels to some unlabeled samples during model optimization, and it designs a pseudo-empirical loss that transforms the learning on these unlabeled samples into supervised learning. Second, SSMEKL designs a similarity regularization for unlabeled samples to make full use of their spatial information: the outputs of unlabeled samples are required to be similar to those of neighboring labeled samples, which improves the classification performance of the model. In this way, SSMEKL improves the classification performance of MEKL by combining a small number of labeled samples with numerous unlabeled ones. Experimental results on four real-world data sets and two multiview data sets validate the effectiveness and superiority of the proposed SSMEKL.
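The pseudo-labeling idea the abstract describes — using agreement across multiple empirical kernel spaces to label unlabeled samples — can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the RBF kernel choice, the 1-nearest-neighbor vote, and the `gammas` parameter are all illustrative assumptions standing in for the paper's collaborative multi-kernel criterion.

```python
import numpy as np

def empirical_kernel_map(X, X_basis, gamma):
    """Empirical kernel mapping: represent each sample by its RBF
    similarities to a set of basis samples (here, the labeled set)."""
    d2 = ((X[:, None, :] - X_basis[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def pseudo_label_by_agreement(X_lab, y_lab, X_unlab, gammas):
    """Assign a pseudo label to an unlabeled sample only when the
    nearest-labeled-neighbor vote agrees across all empirical kernel
    spaces (a simplified stand-in for the paper's collaborative
    multi-kernel criterion)."""
    votes = []
    for g in gammas:
        Phi_l = empirical_kernel_map(X_lab, X_lab, g)
        Phi_u = empirical_kernel_map(X_unlab, X_lab, g)
        # 1-NN vote in this empirical kernel space
        d2 = ((Phi_u[:, None, :] - Phi_l[None, :, :]) ** 2).sum(-1)
        votes.append(y_lab[d2.argmin(axis=1)])
    votes = np.stack(votes)            # shape: (num_kernels, num_unlabeled)
    agree = (votes == votes[0]).all(axis=0)
    return votes[0], agree             # pseudo labels + agreement mask
```

Only samples where `agree` is true would receive the pseudo-empirical loss; the remaining unlabeled samples would be handled by the similarity regularization term, which is omitted here.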
Pages: 1674-1696 (23 pages)