Reducing bias to source samples for unsupervised domain adaptation

Cited by: 12
Authors
Ye, Yalan [1]
Huang, Ziwei [1]
Pan, Tongjie [1]
Li, Jingjing [1]
Shen, Heng Tao [1]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu, Peoples R China
Keywords
Domain adaptation; Transfer learning; Generative adversarial network
DOI
10.1016/j.neunet.2021.03.021
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Unsupervised Domain Adaptation (UDA) makes predictions for target-domain data when labels are available only in the source domain. Much UDA work focuses on finding a common representation of the two domains via domain alignment, assuming that a classifier trained on the source domain will generalize well to the target domain. Consequently, most existing UDA methods only minimize the domain discrepancy without enforcing any constraint on the classifier. However, because each domain has its own characteristics, a perfect common representation is difficult to achieve, especially when the source and target domains have low similarity. As a result, the classifier is biased toward source-domain features and makes incorrect predictions on the target domain. To address this issue, we propose a novel approach, Reducing Bias to source samples for unsupervised Domain Adaptation (RBDA), which jointly matches the distributions of the two domains and reduces the classifier's bias toward source samples. Specifically, RBDA first conditions the adversarial networks on the cross-covariance of the learned features and the classifier predictions to match the distributions of the two domains. Then, to reduce the classifier's bias toward source samples, RBDA employs three mechanisms: a mean teacher model that guides the training of the original model, a regularization term that constrains the model, and an improved cross-entropy loss that provides a better supervised learning signal. Comprehensive experiments on several open benchmarks demonstrate that RBDA achieves state-of-the-art results, showing its effectiveness for unsupervised domain adaptation scenarios. (C) 2021 Elsevier Ltd. All rights reserved.
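The abstract describes two concrete building blocks: conditioning the domain discriminator on the cross-covariance (outer product) of features and classifier predictions, in the style of conditional adversarial adaptation, and a mean teacher maintained as an exponential moving average of the trained model. A minimal, hypothetical PyTorch sketch of these two ideas is given below; it is not the authors' released code, and the function names and the `alpha` decay rate are assumptions made for illustration.

```python
import torch

def multilinear_conditioning(features: torch.Tensor,
                             predictions: torch.Tensor) -> torch.Tensor:
    """Build the discriminator input from the outer product f (x) p,
    which captures the cross-covariance between learned features
    and classifier predictions (CDAN-style conditioning)."""
    b = features.size(0)
    # (b, C, 1) x (b, 1, D) -> (b, C, D), flattened to (b, C*D)
    joint = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))
    return joint.view(b, -1)

@torch.no_grad()
def update_mean_teacher(student: torch.nn.Module,
                        teacher: torch.nn.Module,
                        alpha: float = 0.999) -> None:
    """Exponential-moving-average update: the teacher's weights
    track the student's, and the teacher's predictions can then
    guide (regularize) the student's training on target data."""
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```

In a training loop of this shape, the flattened joint tensor would be fed to the domain discriminator in place of raw features, and a consistency term between student and teacher predictions on target samples (e.g., a mean-squared-error penalty) is the usual way a mean teacher constrains the classifier.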
Pages: 61-71
Page count: 11