Online Batch Selection for Enhanced Generalization in Imbalanced Datasets

Cited by: 1
Authors
Ioannou, George [1]
Alexandridis, Georgios [1]
Stafylopatis, Andreas [1]
Affiliations
[1] Natl Tech Univ Athens, Sch Elect & Comp Engn, Artificial Intelligence & Learning Syst Lab, Zografos 15780, Greece
Keywords
imbalance; batch selection; sampling; convergence speed; generalization; prediction; SMOTE
DOI
10.3390/a16020065
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Importance sampling, a variant of online sampling, is often used in neural network training to improve the learning process and, in particular, the convergence speed of the model. Here, we study the performance of a set of batch selection algorithms, i.e., online sampling algorithms that process small parts of the dataset at each iteration. Convergence is accelerated by biasing selection towards hard samples. We first consider the baseline algorithm and investigate its convergence speed and generalization efficiency; the latter, however, degrades when the dataset is poorly balanced. To alleviate this shortcoming, we propose two variations of the algorithm that achieve better generalization without undermining the convergence speed-up offered by the original algorithm. Various data transformation techniques were tested in conjunction with the proposed scheme to develop an overall training method and to ensure robustness in different training environments. An experimental framework was constructed using three naturally imbalanced datasets and one artificially imbalanced one. The results confirm the convergence advantage of the extended algorithm over the vanilla one and, most importantly, show better generalization performance in imbalanced data environments.
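The core mechanism the abstract describes, biasing batch selection towards high-loss ("hard") samples, can be illustrated with a short sketch. The rank-based exponential weighting below is one common way to implement such a bias and is an illustrative assumption only; the paper's actual algorithms, the `pressure` parameter, and the loss bookkeeping are not taken from the source.

```python
import numpy as np

def make_selection_probs(losses, pressure=100.0):
    """Rank samples by loss (hardest first) and assign exponentially
    decaying selection probabilities. `pressure` is the ratio between
    the selection probability of the hardest and the easiest sample
    (an illustrative default, not a value from the paper)."""
    n = len(losses)
    order = np.argsort(losses)[::-1]          # hardest sample first
    decay = np.exp(np.log(pressure) / n)      # per-rank decay factor
    probs = 1.0 / decay ** np.arange(n)       # p_rank proportional to decay^-rank
    probs /= probs.sum()
    p = np.empty(n)
    p[order] = probs                          # map rank probs back to sample indices
    return p

def sample_batch(losses, batch_size, rng):
    """Draw one training batch, biased towards high-loss samples."""
    p = make_selection_probs(losses)
    return rng.choice(len(losses), size=batch_size, replace=False, p=p)

# Toy usage: in training, `losses` would be the running per-sample
# losses maintained by the learner; here they are random placeholders.
rng = np.random.default_rng(0)
losses = rng.random(1000)
batch = sample_batch(losses, batch_size=32, rng=rng)
```

In an actual training loop these probabilities would be recomputed, or periodically refreshed, from the model's running per-sample losses, so that the selection bias tracks which samples are currently hard.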
Pages: 20