Distributed multi-label feature selection using individual mutual information measures

Cited by: 95
Authors
Gonzalez-Lopez, Jorge [1 ]
Ventura, Sebastian [2 ]
Cano, Alberto [1 ]
Affiliations
[1] Virginia Commonwealth Univ, Dept Comp Sci, Richmond, VA 23284 USA
[2] Univ Cordoba, Dept Comp Sci & Numer Anal, Cordoba, Spain
Keywords
Multi-label learning; Feature selection; Mutual information; Distributed computing; Apache Spark; CLASSIFICATION; TRANSFORMATION; ALGORITHM; SPARK; KNN
DOI
10.1016/j.knosys.2019.105052
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Multi-label learning generalizes traditional learning by allowing an instance to belong to multiple labels simultaneously. As a result, multi-label data are characterized by a large label space and by dependencies among labels. These challenges have been addressed by feature selection techniques, which improve the accuracy of the final model. However, the large number of features, together with the large number of labels, calls for new approaches to manage data effectively and efficiently in distributed computing environments. This paper proposes a distributed model on Apache Spark that computes a score measuring the quality of each feature with respect to multiple labels. We propose two approaches to aggregating the mutual information across multiple labels: Euclidean Norm Maximization (ENM) and Geometric Mean Maximization (GMM). The former selects the features with the largest L2-norm, whereas the latter selects the features with the largest geometric mean. Experiments compare 9 distributed multi-label feature selection methods on 12 datasets using 12 metrics. Results validated through statistical analysis indicate that ENM outperforms the reference methods by maximizing the relevance while minimizing the redundancy of the selected features, in constant selection time. (C) 2019 Elsevier B.V. All rights reserved.
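The two aggregation strategies described in the abstract can be sketched in plain Python. This is an illustrative toy, not the paper's Spark implementation: the `mi` matrix, function names, and toy values below are hypothetical, and it is assumed that per-feature, per-label mutual information values have already been computed.

```python
import math

def enm_score(mi_row):
    # Euclidean Norm Maximization (ENM): L2-norm of a feature's
    # mutual-information values across all labels.
    return math.sqrt(sum(v * v for v in mi_row))

def gmm_score(mi_row):
    # Geometric Mean Maximization (GMM): geometric mean of a feature's
    # mutual-information values across all labels.
    return math.prod(mi_row) ** (1.0 / len(mi_row))

def select_features(mi, k, score):
    # mi: dict mapping feature name -> list of MI values (one per label).
    # Rank features by the chosen aggregation score and keep the top k.
    ranked = sorted(mi, key=lambda f: score(mi[f]), reverse=True)
    return ranked[:k]

# Hypothetical toy MI matrix: 3 features x 2 labels.
mi = {"f1": [0.9, 0.1], "f2": [0.5, 0.5], "f3": [0.2, 0.2]}
print(select_features(mi, 2, enm_score))  # ['f1', 'f2']
print(select_features(mi, 2, gmm_score))  # ['f2', 'f1']
```

The toy values show how the two criteria can disagree: ENM favors `f1`, whose single large MI value dominates the L2-norm, while GMM favors `f2`, which is moderately relevant to every label, since one near-zero value drags a geometric mean down.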
Pages: 13