Online Batch Selection for Enhanced Generalization in Imbalanced Datasets

Cited by: 1
Authors
Ioannou, George [1 ]
Alexandridis, Georgios [1 ]
Stafylopatis, Andreas [1 ]
Affiliations
[1] Natl Tech Univ Athens, Sch Elect & Comp Engn, Artificial Intelligence & Learning Syst Lab, Zografos 15780, Greece
Keywords
imbalance; batch selection; sampling; convergence speed; generalization; PREDICTION; SMOTE;
DOI
10.3390/a16020065
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Importance sampling, a variant of online sampling, is often used in neural network training to improve the learning process and, in particular, the convergence speed of the model. Here, we study the performance of a set of batch selection algorithms, i.e., online sampling algorithms that process small parts of the dataset at each iteration. Convergence is accelerated by biasing training towards hard samples. We first consider the baseline algorithm and investigate its performance in terms of convergence speed and generalization efficiency; the latter, however, degrades when the dataset is poorly balanced. To alleviate this shortcoming, we propose two variations of the algorithm that achieve better generalization without undermining the convergence speed-up offered by the original algorithm. Various data transformation techniques were tested in conjunction with the proposed scheme to develop an overall training method for the model and to ensure robustness in different training environments. An experimental framework was constructed using three naturally imbalanced datasets and one artificially imbalanced one. The results confirm the convergence advantage of the extended algorithms over the vanilla one and, more importantly, show better generalization performance in imbalanced data environments.
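The hard-sample bias described in the abstract can be illustrated with a minimal sketch. The class below, its name, and the selection_pressure parameter are illustrative assumptions rather than the paper's exact procedure: samples are ranked by their most recently observed loss and drawn with an exponentially decaying probability over ranks, so harder samples enter mini-batches more often.

    import numpy as np

    class LossRankedBatchSelector:
        """Draws mini-batches with probability given by a rank-based weight
        over per-sample losses, so harder samples are picked more often."""

        def __init__(self, n_samples, batch_size=32, selection_pressure=100.0):
            self.n = n_samples
            self.batch_size = batch_size
            # Exponentially decaying weight over loss ranks; selection_pressure
            # (an assumed knob) controls how strongly the hardest samples dominate.
            ranks = np.arange(n_samples)
            weights = np.exp(-np.log(selection_pressure) * ranks / n_samples)
            self.rank_probs = weights / weights.sum()
            # Unseen samples start with infinite loss, i.e. maximally "hard".
            self.losses = np.full(n_samples, np.inf)

        def next_batch(self, rng):
            # Order samples from highest to lowest recorded loss, then sample
            # rank positions according to the decaying distribution.
            order = np.argsort(-self.losses)
            picked_ranks = rng.choice(self.n, size=self.batch_size,
                                      replace=False, p=self.rank_probs)
            return order[picked_ranks]

        def update(self, indices, batch_losses):
            # Refresh stored losses for the samples just trained on.
            self.losses[indices] = batch_losses

    # Toy usage with a synthetic quadratic "loss"; in a real training loop the
    # per-sample losses would come from the model's forward pass at each step.
    rng = np.random.default_rng(0)
    data = rng.normal(size=1000)
    selector = LossRankedBatchSelector(n_samples=1000, batch_size=32)
    for step in range(5):
        idx = selector.next_batch(rng)
        selector.update(idx, data[idx] ** 2)

This sketch only captures the generic hard-sample bias; the paper's proposed variations additionally address class imbalance, which a purely loss-ranked scheme does not handle on its own.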
Pages: 20