Distributed multi-label feature selection using individual mutual information measures

Cited by: 95
Authors
Gonzalez-Lopez, Jorge [1]
Ventura, Sebastian [2]
Cano, Alberto [1]
Affiliations
[1] Virginia Commonwealth Univ, Dept Comp Sci, Richmond, VA 23284 USA
[2] Univ Cordoba, Dept Comp Sci & Numer Anal, Cordoba, Spain
Keywords
Multi-label learning; Feature selection; Mutual information; Distributed computing; Apache Spark; CLASSIFICATION; TRANSFORMATION; ALGORITHM; SPARK; KNN;
DOI
10.1016/j.knosys.2019.105052
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-label learning generalizes traditional learning by allowing an instance to belong to multiple labels simultaneously. Multi-label data are therefore characterized by a large label-space dimensionality and by dependencies among labels. These challenges have been addressed by feature selection techniques, which improve the accuracy of the final model. However, the large number of features, combined with a large number of labels, calls for new approaches that manage data effectively and efficiently in distributed computing environments. This paper proposes a distributed model on Apache Spark that computes a score measuring the quality of each feature with respect to multiple labels. We propose two approaches to aggregating the mutual information of multiple labels: Euclidean Norm Maximization (ENM) and Geometric Mean Maximization (GMM). The former selects the features with the largest L2-norm, whereas the latter selects the features with the largest geometric mean. Experiments compare 9 distributed multi-label feature selection methods on 12 datasets and 12 metrics. Results, validated through statistical analysis, indicate that ENM outperforms the reference methods by maximizing the relevance while minimizing the redundancy of the selected features in constant selection time. (C) 2019 Elsevier B.V. All rights reserved.
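The scoring idea described in the abstract can be illustrated with a small single-machine sketch: compute the mutual information between each feature and each label, then aggregate the per-label values per feature with either the L2-norm (ENM) or the geometric mean (GMM). This is an illustrative reading of the aggregation schemes only, not the paper's Apache Spark implementation; all function names (`mutual_information`, `enm_scores`, `gmm_scores`) and the toy data are our own, and the MI estimator assumes discrete features and labels.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in MI estimate for two discrete variables, in nats."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))  # joint probability
            if pxy > 0:
                px = np.mean(x == xv)
                py = np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def mi_matrix(X, Y):
    """MI of every (feature, label) pair: shape (n_features, n_labels)."""
    return np.array([[mutual_information(X[:, j], Y[:, l])
                      for l in range(Y.shape[1])]
                     for j in range(X.shape[1])])

def enm_scores(X, Y):
    """Euclidean Norm Maximization: L2-norm of each feature's MI vector."""
    return np.linalg.norm(mi_matrix(X, Y), axis=1)

def gmm_scores(X, Y, eps=1e-12):
    """Geometric Mean Maximization: geometric mean of each feature's MI
    vector (eps avoids log(0) when a feature is independent of a label)."""
    return np.exp(np.mean(np.log(mi_matrix(X, Y) + eps), axis=1))

# Toy check: feature 0 copies label 0, feature 1 is constant (uninformative).
Y = np.array([[0, 1], [0, 0], [1, 1], [1, 0], [0, 1], [1, 0]])
X = np.column_stack([Y[:, 0], np.zeros(6, dtype=int)])
print(enm_scores(X, Y))  # informative feature gets the larger score
print(gmm_scores(X, Y))
```

Under both aggregations the informative feature ranks first; they differ when a feature is relevant to some labels but independent of others, where the geometric mean penalizes any near-zero per-label MI much more heavily than the L2-norm does.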
Pages: 13