A Boundary-Information-Based Oversampling Approach to Improve Learning Performance for Imbalanced Datasets

Cited by: 1
Authors
Li, Der-Chiang [1 ]
Shi, Qi-Shi [1 ]
Lin, Yao-San [2 ]
Lin, Liang-Sian [3 ]
Affiliations
[1] Natl Cheng Kung Univ, Dept Ind & Informat Management, Univ Rd, Tainan 70101, Taiwan
[2] Nanyang Technol Univ, Singapore Ctr Chinese Language, Ghim Moh Rd, Singapore 279623, Singapore
[3] Natl Taipei Univ Nursing & Hlth Sci, Dept Informat Management, Ming Te Rd, Taipei 112303, Taiwan
Keywords
boundary information; synthetic sample generation; imbalanced datasets; support vector machine; sampling method; SMOTE; classification; prediction; algorithm; noisy
DOI
10.3390/e24030322
Chinese Library Classification (CLC)
O4 [Physics]
Discipline code
0702
Abstract
Oversampling is the most popular data-preprocessing technique for imbalanced learning: it makes traditional classifiers usable on imbalanced data. Through an overall review of oversampling techniques (oversamplers), we find that some can be regarded as danger-information-based oversamplers (DIBOs), which create samples near danger areas so that those positive examples can be correctly classified, while others are safe-information-based oversamplers (SIBOs), which create samples near safe areas to raise the precision of predicted positive values. However, DIBOs cause too many negative examples in the overlapped areas to be misclassified, and SIBOs cause too many borderline positive examples to be classified incorrectly. Building on their respective advantages and disadvantages, a boundary-information-based oversampler (BIBO) is proposed. First, a concept of boundary information that considers safe information and danger information simultaneously is introduced, so that created samples lie near the decision boundary. The experimental results show that DIBOs and BIBO outperform SIBOs on the basic metrics of recall and negative-class precision; SIBOs and BIBO outperform DIBOs on the basic metrics of specificity and positive-class precision; and BIBO is better than both DIBOs and SIBOs in terms of integrated metrics.
Pages: 16