The Unsymmetrical-Style Co-training

Cited by: 0
Authors
Wang, Bin [1 ]
Zhang, Harry [1 ]
Spencer, Bruce [2 ]
Guo, Yuanyuan [1 ]
Affiliations
[1] Univ New Brunswick, Fac Comp Sci, POB 4400, Fredericton, NB E3B 5A3, Canada
[2] Natl Res Council Canada, Fredericton, NB E3B 9W4, Canada
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Semi-supervised learning has attracted much attention over the past decade because it combines unlabeled data with labeled data to improve the learning capability of models. Co-training is a representative paradigm of semi-supervised learning. Typically, co-training style algorithms such as co-training and co-EM learn two classifiers from two views of the instance space, but they must satisfy the assumption that the two views are each sufficient for learning and conditionally independent given the class label. Other co-training style algorithms, such as the multiple-learner approach, train two different underlying classifiers on a single view of the instance space; however, they do not utilize the labeled data effectively and suffer from early convergence. After analyzing various co-training style algorithms, we found that all of them have symmetrical framework structures, and that these symmetrical structures are tied to their constraints. In this paper, we propose a novel unsymmetrical-style method, which we call the unsymmetrical co-training algorithm. It combines the advantages of other co-training style algorithms while overcoming their disadvantages. Within our unsymmetrical structure, we apply two unsymmetrical classifiers, a self-training classifier and an EM classifier, and train them in an unsymmetrical way. The unsymmetrical co-training algorithm not only avoids the conditional independence assumption but also overcomes early convergence and the ineffective utilization of labeled data. We conduct experiments comparing the performance of these co-training style algorithms; the results show that the unsymmetrical co-training algorithm outperforms the others.
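The abstract does not spell out the training schedule, so the following is only a minimal sketch of the general idea it describes: pairing a self-training classifier (which adds hard, high-confidence pseudo-labels) with an EM-style classifier (which refits on soft, posterior-weighted labels) over a single feature view. The function name, the choice of GaussianNB, the confidence threshold, and the soft-label weighting trick are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of an unsymmetrical co-training loop (hypothetical;
# not the paper's actual algorithm). Assumes scikit-learn and NumPy.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def unsymmetrical_cotrain(X_l, y_l, X_u, rounds=10, conf=0.95):
    """Pair a self-training classifier with an EM-style classifier on the
    SAME view, so the two learners are trained in different (unsymmetrical)
    ways rather than on two independent views."""
    self_clf = GaussianNB()  # self-training side: hard pseudo-labels
    em_clf = GaussianNB()    # EM side: soft (posterior-weighted) labels
    X_lab, y_lab = np.asarray(X_l), np.asarray(y_l)
    X_u = np.asarray(X_u)

    for _ in range(rounds):
        # Self-training step: move only high-confidence unlabeled points
        # into the labeled pool with their predicted (hard) labels.
        self_clf.fit(X_lab, y_lab)
        if len(X_u):
            proba = self_clf.predict_proba(X_u)
            keep = proba.max(axis=1) >= conf
            if keep.any():
                X_lab = np.vstack([X_lab, X_u[keep]])
                y_lab = np.concatenate(
                    [y_lab, self_clf.classes_[proba[keep].argmax(axis=1)]])
                X_u = X_u[~keep]

        # EM-style step: refit on the labeled pool plus ALL remaining
        # unlabeled points, each duplicated once per class and weighted
        # by its current posterior probability (soft labels).
        em_clf.fit(X_lab, y_lab)
        if len(X_u):
            post = em_clf.predict_proba(X_u)
            classes = em_clf.classes_
            X_em = np.vstack([X_lab] + [X_u] * len(classes))
            y_em = np.concatenate(
                [y_lab] + [np.full(len(X_u), c) for c in classes])
            w = np.concatenate(
                [np.ones(len(y_lab))] + [post[:, i] for i in range(len(classes))])
            em_clf.fit(X_em, y_em, sample_weight=w)

    return self_clf, em_clf
```

Because both learners share one view, this sketch needs no two-view sufficiency or conditional-independence assumption; the asymmetry comes from how each classifier consumes the unlabeled data (hard thresholded labels vs. soft posterior weights).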
Pages: 100-111
Page count: 12