GMDH-based semi-supervised feature selection for customer classification

Cited by: 44
Authors
Xiao, Jin [1 ,2 ]
Cao, Hanwen [1 ]
Jiang, Xiaoyi [3 ]
Gu, Xin [1 ,2 ]
Xie, Ling [1 ,2 ]
Affiliations
[1] Sichuan Univ, Business Sch, Chengdu 610064, Sichuan, Peoples R China
[2] Sichuan Univ, Soft Sci Inst, Chengdu 610064, Sichuan, Peoples R China
[3] Univ Munster, Dept Math & Comp Sci, Einsteinstr 62, D-48149 Munster, Germany
Funding
National Natural Science Foundation of China;
Keywords
Feature selection; Group method of data handling (GMDH); Customer classification; Semi-supervised learning; CHURN PREDICTION; OBJECT DETECTION; NEURAL-NETWORKS; ALGORITHMS; CONSTRAINT; RELEVANCE; SYSTEM;
DOI
10.1016/j.knosys.2017.06.018
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Data dimension reduction is an important step in customer classification modeling, and feature selection has been a research focus within this field. This study introduces the group method of data handling (GMDH), proposes a GMDH-based semi-supervised feature selection (GMDH-SSFS) algorithm, and applies it to customer feature selection. The algorithm can simultaneously exploit a small set of labeled samples L and a large set of unlabeled samples U. Moreover, during feature selection it considers both the relationship between features and class labels and the relationships among the features themselves. The GMDH-SSFS model consists of three main stages: 1) train N basic classification models on the labeled dataset L; 2) selectively label samples in the unlabeled dataset U and add them to L; 3) train the GMDH neural network on the enlarged training set L and select the optimal feature subset Fs. An empirical analysis of four customer classification datasets suggests that the features selected by the GMDH-SSFS model are readily interpretable. Meanwhile, the customer classification performance of a model trained on the selected feature subset is superior to that of models trained with the commonly used Laplacian score (an unsupervised feature selection algorithm), the Fisher score (a supervised feature selection algorithm), and FW-SemiFS and S3VM-FS (two semi-supervised feature selection algorithms). (C) 2017 Elsevier B.V. All rights reserved.
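The three-stage workflow described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the base classifiers of stage 1 are assumed here to be nearest-centroid models trained on bootstrap samples, the "selective labeling" of stage 2 is approximated by keeping only unlabeled samples on which all base classifiers agree, and the GMDH neural network of stage 3 is replaced by a simple correlation-based feature scorer as a stand-in.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    """Assign each sample to the class with the closest centroid."""
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def gmdh_ssfs_sketch(X_l, y_l, X_u, n_models=5, k=2, rng=None):
    """Hedged sketch of the GMDH-SSFS three-stage workflow.

    Stage 3 uses absolute label correlation as a stand-in scorer,
    NOT the GMDH network of the paper.
    """
    rng = np.random.default_rng(rng)
    n = len(X_l)
    # Stage 1: train N base classifiers on bootstrap samples of L.
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)
        models.append(nearest_centroid_fit(X_l[idx], y_l[idx]))
    # Stage 2: selectively label U -- keep only the samples on which
    # all base classifiers agree, and add them to L.
    preds = np.array([nearest_centroid_predict(m, X_u) for m in models])
    agree = (preds == preds[0]).all(axis=0)
    X_new = np.vstack([X_l, X_u[agree]])
    y_new = np.concatenate([y_l, preds[0][agree]])
    # Stage 3 (stand-in for the GMDH network): rank features by their
    # absolute correlation with the class label on the enlarged set,
    # and return the indices of the top-k features.
    scores = np.abs([np.corrcoef(X_new[:, j], y_new)[0, 1]
                     for j in range(X_new.shape[1])])
    return np.argsort(scores)[::-1][:k]
```

On synthetic data with two informative features and two noise features, the sketch recovers the informative pair even when only a small fraction of samples is labeled, which mirrors the semi-supervised setting the paper targets.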
Pages: 236-248
Page count: 13
Cited References
60 entries in total
[21]  
Ivakhnenko A. G., 2000, Pattern Recognition and Image Analysis, V10, P187
[22]  
Ivakhnenko A. G., 1976, Soviet Automatic Control, V9, P21
[23]  
Jolliffe I.T., 2002, Principal Component Analysis
[24]   Wrappers for feature subset selection [J].
Kohavi, R ;
John, GH .
ARTIFICIAL INTELLIGENCE, 1997, 97 (1-2) :273-324
[25]  
Kubat M, 1997, LECT NOTES ARTIF INT, V1224, P146
[26]   Deep learning [J].
LeCun, Yann ;
Bengio, Yoshua ;
Hinton, Geoffrey .
NATURE, 2015, 521 (7553) :436-444
[27]   The effect of feature selection on financial distress prediction [J].
Liang, Deron ;
Tsai, Chih-Fong ;
Wu, Hsin-Ting .
KNOWLEDGE-BASED SYSTEMS, 2015, 73 :289-297
[28]   Toward integrating feature selection algorithms for classification and clustering [J].
Liu, H ;
Yu, L .
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2005, 17 (04) :491-502
[29]   On the suitability of resampling techniques for the class imbalance problem in credit scoring [J].
Marques, A. I. ;
Garcia, V. ;
Sanchez, J. S. .
JOURNAL OF THE OPERATIONAL RESEARCH SOCIETY, 2013, 64 (07) :1060-1070
[30]  
Mattison R., 2001, Telecom churn management: The golden opportunity