Cost-sensitive positive and unlabeled learning

Cited by: 19
Authors
Chen, Xiuhua [1 ]
Gong, Chen [1 ,2 ]
Yang, Jian [1 ,3 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Key Lab Intelligent Percept & Syst High Dimens In, Sch Comp Sci & Engn, PCA Lab,Minist Educ, Nanjing, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[3] Jiangsu Key Lab Image & Video Understanding Socia, Minist Educ, Peoples R China
Keywords
Positive and Unlabeled learning (PU learning); Class imbalance; Cost-sensitive learning; Generalization bound; SMOTE;
DOI
10.1016/j.ins.2021.01.002
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Positive and Unlabeled learning (PU learning) aims to train a binary classifier solely from positively labeled and unlabeled data when negatively labeled data are absent or distributed too diversely. However, none of the existing PU learning methods takes the class imbalance problem into account; as a result, the minority class is largely neglected and the learned classifier is likely to be biased. Therefore, this paper proposes a novel algorithm termed "Cost-Sensitive Positive and Unlabeled learning" (CSPU), which imposes different misclassification costs on different classes when conducting PU classification. Specifically, we assign distinct weights to the losses caused by false negative and false positive examples, and employ the double hinge loss to build our CSPU algorithm under the framework of empirical risk minimization. Theoretically, we analyze the computational complexity and derive a generalization error bound for CSPU that guarantees the good performance of our algorithm on test data. Empirically, we compare CSPU with state-of-the-art PU learning methods on a synthetic dataset, OpenML benchmark datasets, and real-world datasets. The results clearly demonstrate the superiority of the proposed CSPU over the competing methods on class-imbalanced tasks. (C) 2021 Elsevier Inc. All rights reserved.
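For illustration only, below is a minimal sketch of the kind of cost-weighted unbiased PU empirical risk the abstract describes, assuming the standard du Plessis-style PU risk decomposition with the double hinge loss. The function names, the cost weights c_fn/c_fp, and the class-prior argument prior_pos are hypothetical placeholders, not the paper's notation or the authors' implementation.

    import numpy as np

    def double_hinge(z):
        """Double hinge loss: l(z) = max(-z, max(0, (1 - z) / 2))."""
        return np.maximum(-z, np.maximum(0.0, 0.5 * (1.0 - z)))

    def cs_pu_risk(w, b, x_pos, x_unl, prior_pos, c_fn=1.0, c_fp=1.0, lam=1e-3):
        """Cost-sensitive PU empirical risk for a linear scorer g(x) = x @ w + b.

        The positive sample contributes a false-negative term weighted by c_fn;
        the unlabeled sample, corrected by the positive-class prior, contributes a
        false-positive term weighted by c_fp, following the unbiased decomposition
        R(g) = pi * E_P[l(g)] + E_U[l(-g)] - pi * E_P[l(-g)].
        """
        g_pos = x_pos @ w + b
        g_unl = x_unl @ w + b
        risk_fn = prior_pos * np.mean(double_hinge(g_pos))            # loss on positives
        risk_fp = np.mean(double_hinge(-g_unl)) \
                  - prior_pos * np.mean(double_hinge(-g_pos))         # prior-corrected negative part
        return c_fn * risk_fn + c_fp * risk_fp + lam * np.dot(w, w)   # L2-regularized objective

    # Toy usage: evaluate the risk of a zero scorer on synthetic positive/unlabeled data.
    rng = np.random.default_rng(0)
    x_pos = rng.normal(1.0, 1.0, size=(50, 2))
    x_unl = rng.normal(0.0, 1.0, size=(200, 2))
    print(cs_pu_risk(np.zeros(2), 0.0, x_pos, x_unl, prior_pos=0.3, c_fn=5.0))

In practice such an objective would be minimized over (w, b), with a larger c_fn (or c_fp) chosen to penalize errors on the minority class more heavily.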
Pages: 229-245
Page count: 17