Generatively Inferential Co-Training for Unsupervised Domain Adaptation

Cited by: 15
Authors
Qin, Can [1 ]
Wang, Lichen [1 ]
Zhang, Yulun [1 ]
Fu, Yun [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW) | 2019
Funding
National Science Foundation (USA)
Keywords
DOI
10.1109/ICCVW.2019.00135
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep Neural Networks (DNNs) have greatly boosted performance on a wide range of computer vision and machine learning tasks. Despite such achievements, DNNs are hungry for enormous amounts of high-quality (HQ) training data, which are expensive and time-consuming to collect. To tackle this challenge, domain adaptation (DA) can help learn a model by leveraging the knowledge of low-quality (LQ) data (i.e., the source domain) while generalizing well on label-scarce HQ data (i.e., the target domain). However, existing methods have two problems. First, they mainly focus on high-level feature alignment while neglecting low-level mismatch. Second, a class-conditional distribution shift persists even when features are well aligned. To solve these problems, we propose a novel Generatively Inferential Co-Training (GICT) framework for Unsupervised Domain Adaptation (UDA). GICT is based on cross-domain feature generation and a specifically designed co-training strategy. Feature generation adapts the representation at a low level by translating images across domains. Co-training bridges the conditional distribution shift by assigning high-confidence pseudo labels on the target domain inferred from two distinct classifiers. Extensive experiments on multiple tasks, including image classification and semantic segmentation, demonstrate the effectiveness of the GICT approach.
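The co-training step described in the abstract, in which two distinct classifiers exchange high-confidence pseudo labels on the unlabeled target domain, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `pseudo_label_exchange` and the fixed confidence threshold are hypothetical, and the classifiers are represented only by their softmax outputs.

```python
import numpy as np

def pseudo_label_exchange(probs_a, probs_b, threshold=0.9):
    """One co-training selection step (illustrative sketch, not the GICT code).

    probs_a, probs_b: (n_samples, n_classes) softmax outputs of two distinct
    classifiers A and B on the same unlabeled target-domain batch.
    Samples on which A is confident become pseudo-labeled training data for B,
    and vice versa, so each classifier teaches the other.
    Returns (idx_for_b, labels_for_b, idx_for_a, labels_for_a).
    """
    conf_a = probs_a.max(axis=1)                  # A's confidence per sample
    conf_b = probs_b.max(axis=1)                  # B's confidence per sample
    idx_for_b = np.where(conf_a >= threshold)[0]  # A teaches B on these samples
    idx_for_a = np.where(conf_b >= threshold)[0]  # B teaches A on these samples
    return (idx_for_b, probs_a[idx_for_b].argmax(axis=1),
            idx_for_a, probs_b[idx_for_a].argmax(axis=1))

# Toy usage: A is confident on sample 0, B is confident on sample 1.
probs_a = np.array([[0.95, 0.05], [0.60, 0.40]])
probs_b = np.array([[0.50, 0.50], [0.05, 0.95]])
idx_b, lab_b, idx_a, lab_a = pseudo_label_exchange(probs_a, probs_b)
```

Using two distinct classifiers rather than one reduces confirmation bias: a sample is only pseudo-labeled for a classifier by its peer, whose decision boundary differs.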
Pages: 1055-1064
Page count: 10