The Gradual Resampling Ensemble for mining imbalanced data streams with concept drift

Cited: 42
Authors
Ren, Siqi [1 ]
Liao, Bo [1 ]
Zhu, Wen [1 ]
Li, Zeng [2 ]
Liu, Wei [1 ]
Li, Keqin [1 ,3 ]
Affiliations
[1] Hunan Univ, Coll Informat Sci & Engn, Changsha 410082, Hunan, Peoples R China
[2] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230027, Anhui, Peoples R China
[3] SUNY Coll New Paltz, Dept Comp Sci, New Paltz, NY 12561 USA
Funding
National Natural Science Foundation of China;
Keywords
Concept drift; Data stream mining; Ensemble classifier; Class imbalance; CLASSIFIERS;
DOI
10.1016/j.neucom.2018.01.063
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Knowledge extraction from data streams has received increasing interest in recent years. However, most existing studies assume that the class distribution of data streams is relatively balanced. Reacting to concept drift is more difficult when a data stream is class-imbalanced. Current oversampling methods generally absorb previously received minority examples into the current minority set selectively, by evaluating the similarity between past minority examples and the current minority set. However, this similarity evaluation is easily affected by data difficulty factors. Meanwhile, these oversampling techniques ignore the majority class distribution and thus risk class overlapping. To overcome these issues, we propose an ensemble classifier called the Gradual Resampling Ensemble (GRE). GRE can handle data streams that exhibit both concept drift and class imbalance. On the one hand, a selective resampling method, in which drifting data can be avoided, is applied to select a subset of previous minority examples to amplify the current minority set. Disjuncts are discovered by DBSCAN clustering, so the influence of small disjuncts and outliers on the similarity evaluation can be avoided. Only those minority examples with a low probability of overlapping with the current majority set are selected for resampling the current minority set. On the other hand, previous component classifiers are updated with the latest instances, so the ensemble can quickly adapt to new conditions regardless of the type of concept drift. Through the gradual oversampling of previous chunks using the current minority events, the class distribution of past chunks can be balanced. Favorable results in comparison with other algorithms suggest that GRE maintains good performance on the minority class without sacrificing majority-class performance. (c) 2018 Elsevier B.V. All rights reserved.
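The abstract describes two filters applied before reusing past minority examples: DBSCAN clustering to discard small disjuncts and outliers, and an overlap check against the current majority set. The snippet below is a minimal sketch of that idea, not the authors' GRE implementation; the function name `select_safe_minority`, the thresholds, and the use of a nearest-neighbour distance as the overlap test are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def select_safe_minority(past_minority, current_majority,
                         eps=0.5, min_samples=3, overlap_radius=0.5):
    """Sketch of the selective-resampling idea from the abstract:
    (1) DBSCAN drops small disjuncts and outliers (noise points, label -1);
    (2) any surviving example closer than `overlap_radius` to a current
        majority example is rejected as likely class-overlapping.
    Returns the past minority examples deemed safe for oversampling."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(past_minority)
    core = past_minority[labels != -1]          # keep only clustered points
    if len(core) == 0:
        return core
    nn = NearestNeighbors(n_neighbors=1).fit(current_majority)
    dist, _ = nn.kneighbors(core)               # distance to nearest majority
    return core[dist[:, 0] > overlap_radius]    # reject overlapping examples

# Toy usage: a tight minority cluster, one outlier, and a distant majority set.
past_minority = np.array([[0, 0], [0.1, 0], [0, 0.1],
                          [0.1, 0.1], [0.05, 0.05], [10, 10]])
current_majority = np.array([[5, 5], [5.1, 5], [5, 5.1]])
selected = select_safe_minority(past_minority, current_majority)
```

In this toy run the five clustered points survive both filters, while the isolated point at (10, 10) is labelled noise by DBSCAN and excluded; in GRE the surviving examples would then be used to amplify the current minority set.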
Pages: 150-166
Page count: 17