Solving the class imbalance problem using a counterfactual method for data augmentation

Cited by: 33
Authors
Temraz, Mohammed [1 ,2 ]
Keane, Mark T. [1 ,2 ,3 ]
Affiliations
[1] Univ Coll Dublin, Sch Comp Sci, Dublin 4, Ireland
[2] Univ Coll Dublin, Insight Ctr Data Analyt, Dublin 4, Ireland
[3] Univ Coll Dublin, VistaMilk SFI Res Ctr, Dublin 4, Ireland
Source
MACHINE LEARNING WITH APPLICATIONS | 2022, Vol. 9
Funding
Science Foundation Ireland
Keywords
Counterfactual; Class imbalance problem; Data augmentation; XAI; BORDERLINE-SMOTE; SAMPLING METHOD; CLASSIFICATION; EXPLANATIONS; ALGORITHM;
DOI
10.1016/j.mlwa.2022.100375
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Learning from class-imbalanced datasets poses challenges for many machine learning algorithms. Many real-world domains are, by definition, class imbalanced by virtue of having a majority class that naturally has many more instances than its minority class (e.g., genuine bank transactions occur much more often than fraudulent ones). Many methods have been proposed to solve the class imbalance problem, among the most popular being oversampling techniques (such as SMOTE). These methods generate synthetic instances in the minority class to balance the dataset, performing data augmentations that improve the performance of predictive machine learning (ML). In this paper, we advance a novel data augmentation method (adapted from eXplainable AI) that generates synthetic, counterfactual instances in the minority class. Unlike other oversampling techniques, this method adaptively combines existing instances from the dataset, using actual feature-values rather than interpolating values between instances. Several experiments using four different classifiers and 25 datasets involving binary classes are reported, which show that this Counterfactual Augmentation (CFA) method generates useful synthetic datapoints in the minority class. The experiments also show that CFA is competitive with many other oversampling methods, many of which are variants of SMOTE. The basis for CFA's performance is discussed, along with the conditions under which it is likely to perform better or worse in future tests.
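To make the contrast with SMOTE concrete, the following is a minimal illustrative sketch (not the paper's exact CFA algorithm) of oversampling that recombines actual feature-values from existing minority instances instead of interpolating between them; the function name `cfa_style_augment` and the per-feature random-swap rule are assumptions for illustration only.

```python
import numpy as np

def cfa_style_augment(X_min, n_new, rng=None):
    """Illustrative counterfactual-style oversampling sketch.

    Where SMOTE interpolates between a minority instance and a neighbour
    (producing values that never occurred in the data), here each synthetic
    feature-value is copied verbatim from one of two existing minority
    instances, so every generated value already exists in the dataset.
    """
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n, d = X_min.shape
    synthetic = np.empty((n_new, d))
    for k in range(n_new):
        i = rng.integers(n)
        # nearest minority neighbour of instance i (excluding itself)
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        dist[i] = np.inf
        j = int(np.argmin(dist))
        # per feature, keep instance i's value or swap in the neighbour's
        mask = rng.integers(0, 2, size=d).astype(bool)
        synthetic[k] = np.where(mask, X_min[j], X_min[i])
    return synthetic
```

Because values are copied rather than averaged, this style of augmentation also works for categorical features, where SMOTE-style interpolation is ill-defined.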
Pages: 16