Cost-sensitive positive and unlabeled learning

Cited by: 19
Authors
Chen, Xiuhua [1 ]
Gong, Chen [1 ,2 ]
Yang, Jian [1 ,3 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Key Lab Intelligent Percept & Syst High Dimens In, Sch Comp Sci & Engn, PCA Lab,Minist Educ, Nanjing, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[3] Jiangsu Key Lab Image & Video Understanding Socia, Minist Educ, Peoples R China
Keywords
Positive and Unlabeled learning (PU learning); Class imbalance; Cost-sensitive learning; Generalization bound; SMOTE;
DOI
10.1016/j.ins.2021.01.002
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Positive and Unlabeled learning (PU learning) aims to train a binary classifier solely from positively labeled and unlabeled data, for settings where negatively labeled data are absent or too diverse to collect. However, none of the existing PU learning methods takes the class imbalance problem into account, so the minority class is neglected and the learned classifier is likely to be biased. Therefore, this paper proposes a novel algorithm termed "Cost-Sensitive Positive and Unlabeled learning" (CSPU), which imposes different misclassification costs on the two classes when conducting PU classification. Specifically, we assign distinct weights to the losses caused by false negative and false positive examples, and employ the double hinge loss to build our CSPU algorithm under the framework of empirical risk minimization. Theoretically, we analyze the computational complexity and derive a generalization error bound of CSPU which guarantees the good performance of our algorithm on test data. Empirically, we compare CSPU with state-of-the-art PU learning methods on a synthetic dataset, OpenML benchmark datasets, and real-world datasets. The results clearly demonstrate the superiority of the proposed CSPU over its competitors on class-imbalanced tasks. (C) 2021 Elsevier Inc. All rights reserved.
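The cost-weighted PU risk described in the abstract can be sketched numerically. The snippet below is an illustrative reconstruction, not the authors' implementation: it combines the standard double hinge surrogate loss with an unbiased PU risk estimator, and the cost weights `c_fn`/`c_fp`, function names, and parameter names are our own assumptions for exposition.

```python
import numpy as np

def double_hinge(z):
    # Double hinge surrogate loss: max(-z, max(0, (1 - z) / 2)).
    # It upper-bounds the 0-1 loss and is convex in the margin z = y * g(x).
    return np.maximum(-z, np.maximum(0.0, 0.5 * (1.0 - z)))

def cost_sensitive_pu_risk(g_pos, g_unl, prior, c_fn=1.0, c_fp=1.0):
    """Cost-weighted empirical PU risk (illustrative sketch).

    g_pos : classifier scores g(x) on labeled positive examples
    g_unl : classifier scores g(x) on unlabeled examples
    prior : assumed class prior P(y = +1)
    c_fn, c_fp : misclassification costs for false negatives / false positives
    """
    # Positive-class risk (false negatives), weighted by c_fn.
    risk_pos = c_fn * prior * np.mean(double_hinge(g_pos))
    # Negative-class risk (false positives), weighted by c_fp: treat unlabeled
    # data as negative, then subtract the correction term that removes the
    # contribution of positives hidden inside the unlabeled set.
    risk_neg = c_fp * (np.mean(double_hinge(-g_unl))
                       - prior * np.mean(double_hinge(-g_pos)))
    return risk_pos + risk_neg
```

Minimizing this risk over a hypothesis class (e.g., a linear model in an RKHS) is what empirical risk minimization amounts to here; raising `c_fn` above `c_fp` penalizes missed minority-class positives more heavily, which is the cost-sensitive ingredient.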
Pages: 229-245
Page count: 17