Using Co-Training to Empower Active Learning

Cited by: 0
Authors:
Azad, Payam V. [1 ]
Yaslan, Yusuf [1 ]
Affiliations:
[1] Istanbul Technical University, Computer Engineering Department, Istanbul, Turkey
Source:
2017 25TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU) | 2017
Keywords:
active learning; co-training; machine learning; semi-supervised learning
DOI:
Not available
CLC Classification:
O42 [Acoustics]
Discipline Codes:
070206; 082403
Abstract:
Active learning and co-training are both semi-supervised learning methods used when labeled data is scarce. Active learning tries to improve the learning model by querying an oracle for labels of unlabeled instances, and its main challenge is to find the optimal instance to query. Co-training exploits two different feature sets to enlarge the labeled set without any external information. Several studies have coupled these two methods and achieved noteworthy results. However, we have observed that co-training and active learning outperform the parallel combinations when they are applied in sequence. Using them in sequence means that co-training techniques are used only to find the best queries for active learning, not in the learning process itself. We demonstrate that this approach gives better results than plain active learning, plain co-training, and even current parallel architectures. In this work we used different techniques to split the data into two distinct feature sets; we discuss these alongside our query selection method.
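The sequential architecture described above can be made concrete with a short sketch. The Python code below is an illustration under stated assumptions, not the paper's implementation: it splits the features into two views at random (the paper discusses several splitting techniques), trains a scikit-learn logistic-regression classifier per view, and uses the views' disagreement on the unlabeled pool only to select the next query for the oracle, keeping co-training out of the learning step itself.

import numpy as np
from sklearn.linear_model import LogisticRegression

def split_views(n_features, rng):
    # Randomly partition feature indices into two disjoint views.
    idx = rng.permutation(n_features)
    return idx[: n_features // 2], idx[n_features // 2 :]

def co_training_query(X_lab, y_lab, X_unlab, rng):
    # Train one classifier per view on the labeled data.
    view1, view2 = split_views(X_lab.shape[1], rng)
    clf1 = LogisticRegression(max_iter=1000).fit(X_lab[:, view1], y_lab)
    clf2 = LogisticRegression(max_iter=1000).fit(X_lab[:, view2], y_lab)
    # Posterior estimates of each view on the unlabeled pool.
    p1 = clf1.predict_proba(X_unlab[:, view1])
    p2 = clf2.predict_proba(X_unlab[:, view2])
    # Disagreement score: L1 distance between the two posteriors.
    # The instance the views disagree on most becomes the query.
    disagreement = np.abs(p1 - p2).sum(axis=1)
    return int(np.argmax(disagreement))

# Typical active-learning loop (oracle call elided):
# rng = np.random.default_rng(0)
# i = co_training_query(X_lab, y_lab, X_unlab, rng)
# ... ask the oracle for the label of X_unlab[i], move it to the
# labeled set, and retrain a single model on all features.

Querying where the two views disagree most targets instances whose label cannot be inferred confidently from either view, which is the intuition behind using co-training for query selection rather than for pseudo-labeling.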
Pages: 4
Related Papers (50 total)
  • [21] Discriminative View Learning for Single View Co-Training
    Amand, Joseph St.
    Huan, Jun
    CIKM'16: PROCEEDINGS OF THE 2016 ACM CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2016, : 2221 - 2226
  • [22] Autoregressive Co-Training for Learning Discrete Speech Representations
    Yeh, Sung-Lin
    Tang, Hao
    INTERSPEECH 2022, 2022, : 5000 - 5004
  • [23] CO-TRAINING SUCCEEDS IN COMPUTATIONAL PARALINGUISTICS
    Zhang, Zixing
    Deng, Jun
    Schuller, Bjoern
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 8505 - 8509
  • [24] Co-Training for Handwritten Word Recognition
    Frinken, Volkmar
    Fischer, Andreas
    Bunke, Horst
    Fornes, Alicia
    11TH INTERNATIONAL CONFERENCE ON DOCUMENT ANALYSIS AND RECOGNITION (ICDAR 2011), 2011, : 314 - 318
  • [25] Co-training with relevant random subspaces
    Yaslan, Yusuf
    Cataltepe, Zehra
    NEUROCOMPUTING, 2010, 73 (10-12) : 1652 - 1661
  • [26] Co-training study for Online Regression
    Sousa, Ricardo
    Gama, Joao
    33RD ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, 2018, : 529 - 531
  • [27] ROBUST CO-TRAINING
    Sun, Shiliang
    Jin, Feng
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2011, 25 (07) : 1113 - 1126
  • [28] Stacked co-training for semi-supervised multi-label learning
    Li, Jiaxuan
    Zhu, Xiaoyan
    Wang, Hongrui
    Zhang, Yu
    Wang, Jiayin
    INFORMATION SCIENCES, 2024, 677
  • [29] A Meta Learning-Based Approach for Zero-Shot Co-Training
    Zaks, Guy
    Katz, Gilad
    IEEE ACCESS, 2021, 9 : 146653 - 146666
  • [30] Gemini: A Dual-Task Co-training Model for Partial Label Learning
    Li, Beibei
    Shu, Senlin
    Jin, Beihong
    Xiang, Tao
    Zheng, Yiyuan
    ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I, 2024, 14471 : 328 - 340