Margin calibration in SVM class-imbalanced learning

Cited by: 53
Authors
Yang, Chan-Yun [1 ]
Yang, Jr-Syu [2 ]
Wang, Jian-Jun [3 ]
Affiliations
[1] Technol & Sci Inst No Taiwan, Dept Mech Engn, Taipei 11202, Taiwan
[2] Tamkang Univ, Dept Mech & Electromech Engn, Tamsui 25137, Taipei County, Taiwan
[3] Southwest Univ, Sch Math & Stat, Chongqing 400715, Peoples R China
Keywords
Margin; Cost-sensitive learning; Class-imbalanced learning; Support vector machines; Classification; SUPPORT VECTOR MACHINES; CLASSIFICATION; KERNEL; CONSISTENCY;
DOI
10.1016/j.neucom.2009.08.006
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Imbalanced dataset learning is an important practical issue in machine learning, even for support vector machines (SVMs). In this study, a well-known reference model for solving the problem, proposed by Veropoulos et al., is first studied. From the aspect of the loss function, this cost-sensitive prototype is identified as a penalty-regularized model. Intuitively, the loss function can change not only the penalty but also the margin to recover the biased decision boundary. This study focuses mainly on the effect of the margin and then extends the model to a more general modification. As in the prototype, the modification first adopts an inversely proportional regularized penalty to re-weight the imbalanced classes. In addition to the penalty regularization, the modification employs a margin compensation that makes the margin lopsided, which enables the decision boundary to drift. Two regularization factors, the penalty and the margin, are hence suggested for achieving an unbiased classification. The margin compensation, in association with the penalty regularization, is utilized to calibrate and refine the biased decision boundary and further reduce the bias. With the area under the receiver operating characteristic curve (AuROC) used to examine performance, the modification shows relatively higher scores than the reference model, even though the optimal performance is achieved by the reference model. Some useful characteristics found empirically are also included, which may be convenient for future applications. The theoretical descriptions and experimental validations show the proposed model's potential to achieve highly unbiased accuracy on complex imbalanced datasets. (C) 2009 Elsevier B.V. All rights reserved.
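The two regularization factors described in the abstract can be sketched in code. The following is a minimal illustration, not the authors' exact formulation: a linear SVM trained by subgradient descent on a hinge loss with class-dependent penalty weights (`C_pos`, `C_neg`, set inversely proportional to class frequency as in the Veropoulos-style prototype) and class-dependent margin targets (`m_pos`, `m_neg`, the margin compensation). All function and parameter names are hypothetical.

```python
import numpy as np

def train_linear_svm(X, y, C_pos=1.0, C_neg=1.0, m_pos=1.0, m_neg=1.0,
                     lam=0.01, lr=0.01, epochs=500):
    """Subgradient descent on a class-weighted hinge loss (sketch):
        lam/2 * ||w||^2 + (1/n) * sum_i C_{y_i} * max(0, m_{y_i} - y_i*(w.x_i + b))
    C_{y_i} is the per-class penalty; m_{y_i} is the per-class margin target.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    C = np.where(y > 0, C_pos, C_neg)   # penalty regularization per class
    m = np.where(y > 0, m_pos, m_neg)   # margin compensation per class
    for _ in range(epochs):
        viol = y * (X @ w + b) < m      # points inside their class margin
        w -= lr * (lam * w - (C[viol] * y[viol]) @ X[viol] / n)
        b -= lr * (-np.sum(C[viol] * y[viol]) / n)
    return w, b

# Imbalanced toy data: 50 majority (-1) points vs 5 minority (+1) points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([-2.0, 0.0], 0.3, (50, 2)),
               rng.normal([2.0, 0.0], 0.3, (5, 2))])
y = np.array([-1] * 50 + [1] * 5)

# Inversely proportional penalties (C_pos / C_neg = n_neg / n_pos) plus a
# slightly enlarged minority-class margin target.
w, b = train_linear_svm(X, y, C_pos=10.0, C_neg=1.0, m_pos=1.2, m_neg=1.0)
pred = np.sign(X @ w + b)
minority_recall = np.mean(pred[y == 1] == 1)
accuracy = np.mean(pred == y)
```

Without the re-weighting, a plain hinge loss on such data tends to favor the majority class; raising `C_pos` and `m_pos` pushes the decision boundary away from the minority class, which is the intuition behind combining the penalty and margin factors.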
Pages: 397-411 (15 pages)
Related Papers (45 total)
  • [1] Applying support vector machines to imbalanced datasets
    Akbani, R
    Kwek, S
    Japkowicz, N
    [J]. MACHINE LEARNING: ECML 2004, PROCEEDINGS, 2004, 3201 : 39 - 50
  • [2] [Anonymous], 2004, ACM SIGKDD Explorations Newsletter
  • [3] [Anonymous], 1999, Proceedings of the International Joint Conference on Artificial Intelligence
  • [4] [Anonymous], 2000, LIBSVM: A Library for Support Vector Machines
  • [5] [Anonymous], 1999, Genetic Algorithms + Data Structures = Evolution Programs
  • [6] Asuncion, A., 2007, UCI Machine Learning Repository
  • [7] Bartlett, P.L., 2003, Technical Report 638, UC Dept. of Statistics
  • [8] A tutorial on Support Vector Machines for pattern recognition
    Burges, CJC
    [J]. DATA MINING AND KNOWLEDGE DISCOVERY, 1998, 2 (02) : 121 - 167
  • [9] Callut K, 2005, IEEE IJCNN, P1443
  • [10] Support vector machines for candidate nodules classification
    Campadelli, P
    Casiraghi, E
    Valentini, G
    [J]. NEUROCOMPUTING, 2005, 68 : 281 - 288