CURL: Image Classification using co-training and Unsupervised Representation Learning

Cited by: 6
Authors
Bianco, Simone [1 ]
Ciocca, Gianluigi [1 ]
Cusano, Claudio [2 ]
Affiliations
[1] Univ Milano Bicocca, Dept Informat Syst & Commun, I-20126 Milan, Italy
[2] Univ Pavia, Dept Elect Comp & Biomed Engn, Via Palestro 3, I-27100 Pavia, Italy
Keywords
Image classification; Machine learning algorithms; Pattern analysis; Semi-supervised learning; OBJECT DETECTION; MATRIX; FUSION; SCENE; SHAPE
DOI
10.1016/j.cviu.2016.01.003
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In this paper we propose a strategy for semi-supervised image classification that leverages unsupervised representation learning and co-training. The strategy, called CURL after co-training and unsupervised representation learning, iteratively builds two classifiers on two different views of the data. The two views correspond to different representations learned from both labeled and unlabeled data and differ in the fusion scheme used to combine the image features. To assess the performance of our proposal, we conducted several experiments on widely used data sets for scene and object recognition. We considered three scenarios (inductive, transductive and self-taught learning) that differ in the strategy followed to exploit the unlabeled data. As image features we considered a combination of GIST, PHOG, and LBP, as well as features extracted from a Convolutional Neural Network. Moreover, two embodiments of CURL are investigated: one using Ensemble Projection as unsupervised representation learning coupled with Logistic Regression, and one based on LapSVM. The results show that CURL clearly outperforms other state-of-the-art supervised and semi-supervised learning methods. (C) 2016 Elsevier Inc. All rights reserved.
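The abstract describes CURL only at a high level. As a rough illustration of the two-view co-training idea it builds on, the following is a minimal, hypothetical sketch using plain scikit-learn Logistic Regression on two precomputed feature views; the function name co_train, the confidence-based selection rule, and all parameters are illustrative assumptions, not the authors' CURL algorithm or their Ensemble Projection / LapSVM embodiments.

# Minimal two-view co-training sketch (illustrative assumption, not the authors' CURL code).
# X1_*, X2_* are two precomputed feature "views" (e.g., different fusion schemes of
# GIST/PHOG/LBP or CNN features); y_lab holds the labels of the labeled subset.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1_lab, X2_lab, y_lab, X1_unl, X2_unl, rounds=10, add_per_round=5):
    X1_l, X2_l, y = X1_lab.copy(), X2_lab.copy(), y_lab.copy()
    X1_u, X2_u = X1_unl.copy(), X2_unl.copy()
    clf1 = LogisticRegression(max_iter=1000)
    clf2 = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if len(X1_u) == 0:
            break
        clf1.fit(X1_l, y)                      # classifier on view 1
        clf2.fit(X2_l, y)                      # classifier on view 2
        p1 = clf1.predict_proba(X1_u)          # class probabilities on the unlabeled pool
        p2 = clf2.predict_proba(X2_u)
        conf1, conf2 = p1.max(axis=1), p2.max(axis=1)
        k = min(add_per_round, len(X1_u))
        # Each view promotes its k most confident unlabeled examples.
        pick = np.unique(np.concatenate([np.argsort(-conf1)[:k], np.argsort(-conf2)[:k]]))
        lab1 = clf1.classes_[p1[pick].argmax(axis=1)]
        lab2 = clf2.classes_[p2[pick].argmax(axis=1)]
        pseudo = np.where(conf1[pick] >= conf2[pick], lab1, lab2)
        X1_l = np.vstack([X1_l, X1_u[pick]])   # grow the labeled set in both views
        X2_l = np.vstack([X2_l, X2_u[pick]])
        y = np.concatenate([y, pseudo])
        keep = np.setdiff1d(np.arange(len(X1_u)), pick)
        X1_u, X2_u = X1_u[keep], X2_u[keep]
    return clf1, clf2

At prediction time the two view-specific classifiers can be combined, for example by averaging their class probabilities.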
Pages: 15-29
Number of pages: 15