A Distance-Based Weighted Undersampling Scheme for Support Vector Machines and its Application to Imbalanced Classification

Cited by: 120
Authors
Kang, Qi [1 ]
Shi, Lei [1 ]
Zhou, MengChu [2 ,3 ]
Wang, XueSong [1 ]
Wu, Qidi [1 ]
Wei, Zhi [4 ]
Affiliations
[1] Tongji Univ, Sch Elect & Informat Engn, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[2] Macau Univ Sci & Technol, Inst Syst Engn, Macau 999078, Peoples R China
[3] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[4] New Jersey Inst Technol, Dept Comp Sci, Newark, NJ 07102 USA
Keywords
Class imbalance; data distribution; Euclidean distance; support vector machine (SVM); undersampling; ensemble
DOI
10.1109/TNNLS.2017.2755595
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A support vector machine (SVM) plays a prominent role in classic machine learning, especially in classification and regression. Through structural risk minimization, it has earned a strong reputation for effectively reducing overfitting, avoiding the curse of dimensionality, and not falling into local minima. Nevertheless, existing SVMs do not perform well when facing class imbalance and large-scale samples. Undersampling is a plausible way to alleviate imbalance, but it suffers from soaring computational cost and reduced accuracy because of its many iterations and random sampling process. To improve classification performance on imbalanced data, this work proposes a weighted undersampling (WU) scheme for SVMs based on space geometry distance, yielding an improved algorithm named WU-SVM. In WU-SVM, majority samples are grouped into subregions (SRs) and assigned different weights according to their Euclidean distance to the hyperplane. Samples in an SR with a higher weight have a greater chance of being sampled and used in each learning iteration, so as to retain as much of the original data distribution as possible. Comprehensive experiments test WU-SVM on 21 binary-class and six multiclass publicly available data sets. The results show that it clearly outperforms state-of-the-art methods on three popular metrics for imbalanced classification: area under the curve, F-Measure, and G-Mean.
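The undersampling scheme the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the Pegasos-style subgradient trainer stands in for the paper's SVM solver, and the subregion binning and the `1/(1 + bin)` weighting are assumed forms chosen only to show the idea of weighting majority samples by their Euclidean distance to the hyperplane.

```python
# Hedged sketch of distance-based weighted undersampling (WU) for a linear SVM:
# majority samples are grouped into subregions by distance to the separating
# hyperplane, and near-boundary subregions get higher sampling probability.
# Trainer, binning, and weighting are illustrative assumptions, not the paper's
# exact formulation.
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (hinge loss) with Pegasos-style subgradient steps.
    Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def wu_undersample(X_maj, w, b, n_keep, n_bins=5, seed=0):
    """Keep n_keep majority samples, favoring those near the hyperplane."""
    rng = np.random.default_rng(seed)
    dist = np.abs(X_maj @ w + b) / np.linalg.norm(w)  # distance to hyperplane
    # Partition into n_bins equal-mass subregions by distance quantiles.
    edges = np.quantile(dist, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(dist, edges)                   # 0 = closest subregion
    weight = 1.0 / (1.0 + bins)                       # nearer bins weigh more
    p = weight / weight.sum()
    idx = rng.choice(len(X_maj), size=n_keep, replace=False, p=p)
    return X_maj[idx]

# Demo on synthetic imbalanced 2-D data (400 majority vs. 50 minority).
rng = np.random.default_rng(1)
X_maj = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(400, 2))
X_min = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(50, 2))
X = np.vstack([X_maj, X_min])
y = np.concatenate([-np.ones(400), np.ones(50)])

w, b = train_linear_svm(X, y)                    # initial hyperplane, all data
kept = wu_undersample(X_maj, w, b, n_keep=100)   # distance-weighted subsample
```

In a full pipeline the SVM would then be retrained on `kept` plus the minority samples, and the sample-reweight-retrain loop repeated; the quantile binning keeps every subregion represented, which is how the scheme preserves the original data distribution rather than keeping only boundary points.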
Pages: 4152-4165
Page count: 14