Feature ranking for semi-supervised learning

Times Cited: 6
Authors
Petkovic, Matej [1 ,2 ]
Dzeroski, Saso [1 ,2 ]
Kocev, Dragi [1 ,2 ]
Affiliations
[1] Jozef Stefan Inst, Jamova 39, Ljubljana 1000, Slovenia
[2] Jozef Stefan Int Postgrad Sch, Jamova 39, Ljubljana 1000, Slovenia
Keywords
Feature ranking; Semi-supervised learning; Tree ensembles; Relief; Structured output prediction; Multi-target prediction; Feature selection; Classification; Trees
DOI
10.1007/s10994-022-06181-0
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
The data used for analysis are becoming increasingly complex along several dimensions: high dimensionality, the number of examples, and the availability of labels for the examples. This poses a variety of challenges for existing machine learning methods when analyzing datasets with a large number of examples described in a high-dimensional space, where not all examples have labels. For example, when investigating the toxicity of chemical compounds, many compounds can be described with information-rich, high-dimensional representations, but not all of them have information on their toxicity. To address these challenges, we propose methods for semi-supervised learning (SSL) of feature rankings. The feature rankings are learned in the context of classification and regression, as well as in the context of structured output prediction tasks (multi-label classification, MLC; hierarchical multi-label classification, HMLC; and multi-target regression, MTR). This is the first work that treats the task of feature ranking uniformly across various tasks of semi-supervised structured output prediction. To the best of our knowledge, it is also the first work on SSL of feature rankings for the tasks of HMLC and MTR. More specifically, we propose two approaches, based on predictive clustering tree ensembles and on the Relief family of algorithms, and evaluate their performance across 38 benchmark datasets. The extensive evaluation reveals that rankings based on Random Forest ensembles perform best for classification tasks (including MLC and HMLC) and are the fastest for all tasks, while ensembles based on extremely randomized trees work best for regression tasks. Semi-supervised feature rankings outperform their supervised counterparts on the majority of datasets for all of the considered tasks, showing the benefit of using unlabeled data in addition to labeled data.
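To make the ensemble-based ranking idea concrete, the following sketch contrasts a feature ranking computed from the labeled examples only with one computed after pseudo-labeling the unlabeled examples. This is an illustration under stated assumptions, not the authors' method: the paper uses semi-supervised predictive clustering tree ensembles and Relief variants, whereas the sketch uses scikit-learn's Random Forest importances with self-training as a stand-in, on a synthetic dataset with an assumed 25% labeling rate.

# Minimal sketch of supervised vs. semi-supervised ensemble feature ranking.
# Assumptions: scikit-learn is available; unlabeled examples are marked with -1;
# self-training is used as a simple stand-in for the paper's SSL tree ensembles.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic data: 200 examples, 20 features, only ~25% of the labels kept.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)
rng = np.random.RandomState(0)
y_partial = y.copy()
y_partial[rng.rand(len(y)) > 0.25] = -1  # -1 marks unlabeled examples

# Supervised baseline: rank features using the labeled subset only.
labeled = y_partial != -1
rf_sup = RandomForestClassifier(n_estimators=200, random_state=0)
rf_sup.fit(X[labeled], y_partial[labeled])
rank_sup = np.argsort(-rf_sup.feature_importances_)

# Semi-supervised variant: pseudo-label the unlabeled examples via self-training,
# then rank features with a forest trained on all (pseudo-)labeled data.
st = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
st.fit(X, y_partial)
y_pseudo = y_partial.copy()
y_pseudo[~labeled] = st.transduction_[~labeled]
has_label = y_pseudo != -1  # some examples may stay unlabeled if self-training is not confident
rf_ssl = RandomForestClassifier(n_estimators=200, random_state=0)
rf_ssl.fit(X[has_label], y_pseudo[has_label])
rank_ssl = np.argsort(-rf_ssl.feature_importances_)

print("top-5 features (supervised):     ", rank_sup[:5])
print("top-5 features (semi-supervised):", rank_ssl[:5])

In this sketch the ranking is simply the descending order of impurity-based importances; the paper instead derives rankings from semi-supervised tree ensembles and Relief-style scores, so the code only conveys the workflow of comparing supervised and semi-supervised rankings.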
Pages: 4379-4408
Page count: 30