Predictive Adversarial Learning from Positive and Unlabeled Data

Cited by: 0
Authors
Hu, Wenpeng [1 ]
Le, Ran [2 ]
Liu, Bing [3 ,5 ]
Ji, Feng [4 ]
Ma, Jinwen [1 ]
Zhao, Dongyan [2 ]
Yan, Rui [2 ]
Affiliations
[1] Peking Univ, Sch Math Sci, Dept Informat Sci, Beijing, Peoples R China
[2] Peking Univ, Wangxuan Inst Comp Technol, Beijing, Peoples R China
[3] Univ Illinois, Dept Comp Sci, Chicago, IL USA
[4] Alibaba Grp, Hangzhou, Peoples R China
[5] Peking Univ, Beijing, Peoples R China
Source
THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | 2021 / Vol. 35
Funding
National Key R&D Program of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper studies learning from positive and unlabeled examples, known as PU learning. It proposes a novel PU learning method called Predictive Adversarial Networks (PAN) based on GAN (Generative Adversarial Networks). GAN learns a generator to generate data (e.g., images) to fool a discriminator that tries to determine whether the generated data belong to a (positive) training class. PU learning can be cast as trying to identify (not generate) likely positive instances from the unlabeled set to fool a discriminator that determines whether the identified instances are indeed positive. However, directly applying GAN is problematic because GAN focuses only on the positive data; the resulting PU learning method has high precision but low recall. To address this, we propose a new objective function based on KL-divergence. Evaluation on both image and text data shows that PAN outperforms state-of-the-art PU learning methods as well as a direct adaptation of GAN to PU learning.
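To make the adversarial setup described in the abstract concrete, the sketch below illustrates the general idea (a classifier selecting likely positives from the unlabeled set to fool a discriminator, trained with KL-style terms). It is only an illustration under assumed conditions, not the authors' PAN objective: the toy 2-D data, the networks C and D, and the Bernoulli KL losses are all hypothetical stand-ins for the formulation given in the paper.

import torch
import torch.nn as nn

def mlp(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))

torch.manual_seed(0)
d = 2
pos = torch.randn(200, d) + 2.0                # labeled positive examples
unl = torch.cat([torch.randn(200, d) + 2.0,    # unlabeled set: hidden positives ...
                 torch.randn(200, d) - 2.0])   # ... mixed with negatives

C = mlp(d)   # classifier: scores how likely an unlabeled instance is positive
D = mlp(d)   # discriminator: is this instance a genuine (labeled) positive?
opt_c = torch.optim.Adam(C.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

def bernoulli_kl(p, q, eps=1e-6):
    # KL(Bernoulli(p) || Bernoulli(q)), elementwise; a stand-in for the
    # KL-divergence terms of the paper's actual objective.
    p, q = p.clamp(eps, 1 - eps), q.clamp(eps, 1 - eps)
    return p * (p / q).log() + (1 - p) * ((1 - p) / (1 - q)).log()

for step in range(2000):
    # Discriminator step: pull labeled positives toward 1 and C-selected
    # unlabeled instances toward 0, weighting each unlabeled instance by
    # C's (detached) confidence that it is positive.
    c_scores = torch.sigmoid(C(unl)).detach()
    d_pos = torch.sigmoid(D(pos))
    d_unl = torch.sigmoid(D(unl))
    loss_d = (bernoulli_kl(torch.ones_like(d_pos), d_pos).mean()
              + (c_scores * bernoulli_kl(torch.zeros_like(d_unl), d_unl)).mean())
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Classifier step: adjust C so its scores on the unlabeled set agree with
    # D's judgement (an illustrative stand-in for the adversarial "fool D" update).
    d_unl = torch.sigmoid(D(unl)).detach()
    c_scores = torch.sigmoid(C(unl))
    loss_c = bernoulli_kl(d_unl, c_scores).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

# After training, C alone serves as the PU classifier:
# predict positive when torch.sigmoid(C(x)) > 0.5.

The key difference from a plain GAN, as the abstract notes, is that C identifies existing unlabeled instances rather than generating new ones, and the final classifier is C itself rather than the discriminator.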
Pages: 7806 - 7814
Number of pages: 9